How to Use SEC Filings in Market Intelligence – A Case Study for Google Cloud

The Technical Infrastructure team at Google Cloud recently came to our school with a question about the scale of its biggest competitor, Amazon Web Services (AWS).

To summarize, Amazon recently changed the depreciation schedule of its servers from 3 years to 4 years, straight line. This is a big deal, since extending a depreciation schedule means significantly lower reported expenses each quarter while the change takes effect. Specifically, they reported a difference of $800 million for the next quarter alone.

It turns out you can actually work out a lot about Amazon’s server spending just by crunching some numbers from the past 7 years. In short, you can estimate the portion of capital expenditures allocated to servers by working backwards: find the spending schedule that, with the depreciation change applied, sums to the $800 million figure. Sounds complicated, but it’s easier than you think.

So here’s the $800 million accounting question: From that figure and other publicly available data (SEC filings, earnings statements, industry reports, etc.), would it be possible to reverse engineer how much Amazon spends on their servers, and thus, get an idea of how many they currently have in their fleet?

This problem was the impetus for our school hosting a 6-hour hackathon with Google to see who could come up with the best answer. We eventually took home first prize (go team Lime Green)! Here’s what we did.

Theory & Background

Why does Google want this info? A good estimate of AWS’s scale could help a competitor understand where it stands in terms of theoretical capacity (not just market share), how the two compare in spending, and roughly how fast AWS is expanding over time.

Of course, AWS is fairly secretive about exactly how many servers it has and where its data centers are even located. I mean, why wouldn’t they be?

Despite this, we can derive a pretty good estimate using figures from their SEC reports and, crucially, that figure they released in their most recent earnings call (the $800 million effect). If we set that as a constant, we can do some basic Excel math to work backward and approximate each quarter’s spending on servers. From there, we can estimate the server fleet size by dividing that spending by approximate cost-per-server adjusted by year (server costs have changed a lot since 2014).

It’s far from perfect, but it’s not a bad starting point for uncovering market intelligence intended to be kept hidden using data that’s widely available. Here’s the idea:

  1. Using past 10-Q reports, scrape Amazon’s capital expenditures (CapEx) for the past 3 years/12 quarters (those affected by the accounting change). The idea here is that server spending will fall under CapEx, but the breadth of that category usually “hides” it with other expenses.
  2. Calculate how much each quarter’s depreciation expense would be affected by the change in schedule. For example, a server purchased just last quarter would go from 12 quarters remaining to 16; one purchased two quarters ago would go from 11 to 15, and so on.
  3. Make an assumption about relative spending. There are too many unknowns since we don’t know what portion of each quarter’s CapEx was spent on servers. The simplest assumption is that the percentage of CapEx allocated to servers is constant (see the sketch after this list). There are other possibilities, though adjusting this parameter turns out to matter less than the other assumptions.
  4. Finally, determine a moving estimate of the average server cost for a firm like Amazon based on various assumptions (how long servers last, OEM server manufacturer averages, how long it takes to expense them, potential fixed costs, where/how AWS servers are manufactured, etc) and market trends adjusted for each year. Divide and get an estimate for server fleet size. Done!
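To make that back-solve concrete, here’s a rough Python sketch of steps 2 and 3. It assumes straight-line depreciation applied prospectively (remaining book value spread over the new remaining life) and a constant server share of CapEx; the quarterly CapEx figures are placeholders rather than Amazon’s actual numbers, and our real model lived in the Excel workbook with Solver.

# A rough sketch of the back-solve in steps 2 and 3, not our actual Excel model.
# Assumptions: straight-line depreciation applied prospectively (remaining book
# value spread over the new remaining life) and a constant server share of CapEx.
# The quarterly CapEx figures below are placeholders, not Amazon's real numbers.

OLD_LIFE_Q = 12          # 3-year schedule in quarters
NEW_LIFE_Q = 16          # 4-year schedule in quarters
TARGET_SAVINGS = 800e6   # the reported $800 million effect for the next quarter

# hypothetical quarterly CapEx for the affected 12 quarters, oldest first
capex_quarters = [3.2e9, 3.5e9, 3.8e9, 4.0e9, 4.1e9, 4.3e9,
                  4.6e9, 5.0e9, 5.4e9, 5.9e9, 6.3e9, 6.8e9]

def next_quarter_savings(server_share):
    """Depreciation expense avoided next quarter if servers move from 3y to 4y."""
    savings = 0.0
    for age, capex in enumerate(reversed(capex_quarters), start=1):
        cost = capex * server_share
        old_remaining = OLD_LIFE_Q - (age - 1)    # e.g. bought last quarter: 12 left
        new_remaining = NEW_LIFE_Q - (age - 1)    # ...which becomes 16 under the change
        if old_remaining <= 0:
            continue                              # already fully depreciated
        old_expense = cost / OLD_LIFE_Q
        book_value = cost * old_remaining / OLD_LIFE_Q
        new_expense = book_value / new_remaining  # prospective application
        savings += old_expense - new_expense
    return savings

# bisection stands in for Excel's Solver: find the share that yields $800M
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if next_quarter_savings(mid) < TARGET_SAVINGS:
        lo = mid
    else:
        hi = mid
print("Implied share of CapEx spent on servers: %.1f%%" % (mid * 100))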

Writeup

Okay, let’s see it in action. What follows is a slightly edited version of the deliverable that won us the hackathon. If you want to follow the modeling aspect, you’ll also need this Excel sheet.

We only had six hours to write during the actual competition, but we had the chance to refine some things afterward in preparation for a presentation with the Google Cloud team. Thus, there are also extra considerations like subtracting the cost of server racks and adjusting for DRAM prices (the main cost driver of servers):

Estimating the Scale of Amazon’s Server Operation

A Market Intelligence Report based on SEC Filings, Earnings Statements, Industry Reports, and Other Public Information


Submitted for Consideration in the 2020 MQM Forensics Hackathon

Team Lime Green – Shay Xueyao Fu, Shangyun Song, Yingli Sun, and Alex Zhou

What We Know


Amazon recently extended the depreciation schedule of its servers from a straight-line 3-year schedule to 4 years, which will result in an $800 million decrease in depreciation expense next quarter. Amazon’s total quarterly capital expenditures are publicly available in SEC filings. AWS currently accounts for about 71% of Amazon’s operating profit.

IDC-published results put ODM servers at an average cost of $6,486.55 each for Q3 of 2019, which is roughly in line with recent average x86 server selling prices. AWS uses custom-made white box systems (likely designed in-house and produced by an ODM) and is even building a repertoire of custom silicon for its servers, likely driving unit cost down. From the same published results, we can obtain the total revenue and market share of key players and ODM direct sources in the server market for each quarter.

What We’ll Assume


Since we cannot isolate the relative CapEx set aside for servers in each quarter by Amazon based on SEC filings, we assumed two possible spending schedules: a constant-rate schedule and a market-adjusted schedule. The constant schedule makes the simplest assumption that Amazon’s server spending does not change much year-over-year.

The market-adjusted schedule uses trends in ODM server revenue by quarter and adjusts Amazon’s predicted spending based on this growth, as well as considering the growth rate of AWS availability zones and the change in DRAM pricing. Additionally, we subtract the cost of racks and network switches from the CapEx in both schedules when we calculate the number of servers.

While this assumption is not perfect and Amazon’s spending could differ from market trends, it helps to account for explosive growth in the ODM server market (driven in no small part by AWS). The ODM market’s expansion, combined with nonlinear revenue growth for AWS in recent years, gives us reason to challenge a constant-percent assumption for Amazon’s server spending. We provide estimates based on both assumptions in our appendix.

What We Found


Using the $800 million decrease, we estimate the percent of Amazon’s CapEx spent on servers to be 53.03% under the constant-rate schedule and 47.34% under the market-adjusted schedule. Over the past 6 years, we estimate Amazon’s server spending to total $28.23 billion according to the constant rate and $25.20 billion with the market-adjusted rate. Adjusting for average ODM server prices from 2014 onward, assuming an average useful life of 7 years, and using both a floating and a constant server price, we estimate Amazon currently has approximately 4.9 to 5.5 million servers in operation. This is in line with another estimate we produced based on the approximate number of servers per data center per availability zone.

Appendix

Percent of Amazon’s (and/or AWS) CapEx spent on servers

We created two Excel models and used Solver to find the base percent of Amazon’s CapEx going towards servers. CapEx was set as the “purchases of property and equipment, including internal-use software and website development, net” from the consolidated statements of cash flows in each quarterly and annual filing.

The first model was based on Amazon’s quarterly CapEx spending, which can be found on Amazon’s investor relations site. The second model used the market-adjusted schedule, built from IDC’s total ODM server market revenue along with the growth rate of AWS availability zones and changes in DRAM pricing.

Reference used:

[Screenshot: fixed percent of CapEx on servers]

[Screenshot: market-adjusted floating percent of CapEx on servers]

How Much Amazon Has Spent on Servers over the Last 6 Years

We multiplied the base rate calculated in question 1 by the yearly CapEx spending from Amazon’s 10-K filings.

Reference used:

Number of Servers AWS Currently Has in Their Fleet

We must assume servers last between 5-8 years (competition guidelines). We’ll pick the higher end of this scale due to Amazon’s lengthy tenure in the server market and simplify calculations by choosing a constant server lifetime of 7 years. This means all servers purchased from 2014 (and no earlier) should still be in operation today.

We used average ODM server prices from each year starting in 2014 to estimate the cost Amazon paid to its manufacturers. We also adjusted the CapEx spending for network switches and racks when calculating the number of servers.
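For illustration, here’s how the spend-to-fleet conversion works under the 7-year lifetime, as a rough Python sketch. The yearly server spend, average ODM unit prices, and rack/switch share below are placeholders, not the figures from our workbook.

# A sketch of converting estimated server spend into a fleet count under a
# 7-year useful life. All figures below are placeholders for illustration only.

USEFUL_LIFE_YEARS = 7
CURRENT_YEAR = 2020
RACK_NETWORK_SHARE = 0.15      # assumed share of spend going to racks/switches

server_spend_by_year = {       # estimated $ spent on servers (hypothetical)
    2014: 2.1e9, 2015: 2.6e9, 2016: 3.3e9, 2017: 4.2e9,
    2018: 5.3e9, 2019: 6.2e9, 2020: 4.5e9,
}
avg_unit_price_by_year = {     # average ODM server unit price (hypothetical)
    2014: 5400, 2015: 5600, 2016: 5800, 2017: 6100,
    2018: 6400, 2019: 6500, 2020: 6500,
}

fleet = 0.0
for year, spend in server_spend_by_year.items():
    if CURRENT_YEAR - year < USEFUL_LIFE_YEARS:           # still in service
        servers_only = spend * (1 - RACK_NETWORK_SHARE)   # strip racks/switches
        fleet += servers_only / avg_unit_price_by_year[year]

print("Estimated servers in operation: %.1f million" % (fleet / 1e6))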

Another way to estimate server count is to use the number of server availability zones (69) and then approximate the number of datacenters per availability zone and the number of servers per datacenter. Estimates given at a conference by AWS engineer James Hamilton place the range of servers per data center at 50,000-80,000, and the number of centers per availability zone somewhere between 1-6. We still need to consider the cost of racks and network switches that are recorded as servers. From these ranges and educated guesses from the article, we can determine a current server count using the number of availability zones.

69 AZs × (1 to 6 DCs per AZ) × (50,000 to 65,000 servers per DC) ≈ 3.4 million to 26.9 million servers

This upper bound of the base estimate is likely high, since newly established AZs should have fewer data centers on average than the ones referenced when the article was published. Each data center should also have fewer than the maximum number of servers as it ramps up. For comparison, our final estimate based on the financials came to 4.9 to 5.5 million servers, depending on whether or not we adjusted for ODM server prices in the market and other uncertain factors.

The calculation can be found in the screenshot attached to question 2 and in our Excel workbook.

References Used:

Calculating the Nutritional Content of Popular Online Recipes

I just started business school at Duke University two months ago, and it’s been amazing! I feel like I’ve already made lifelong friends, and there are lots of great events to kick things off with the business school as well as fun things to do in the Raleigh-Durham area.

Our program, the Master of Quantitative Management (MQM), recently hosted its Summer Data Competition. The basic idea was to produce an interesting data set (ostensibly not including any insights taken from it) using any means available. We’d be judged on criteria like originality, cleverness, and usability/potential for insights – of course, demonstrating that potential means performing at least some analysis yourself…

An entry I made with my friend Mayank ended up making it into the finals. I thought the idea was really cool. Here’s what we did:

Premise

“Pick two.”

Like many students, I’ve been trying to maintain a good diet on a low budget, and I’ve come to notice a basic, inescapable dilemma for all eaters. Basically, you can eat cheaply, eat healthily, or eat out. Pick two. Students/early career folks like me generally end up sacrificing the convenience and time savings of having someone else make our meals in favor of cost savings.

If we’re lucky, we also get to maintain our overall health. It’s obviously not guaranteed you even get two of them. The broke college student chowing down on instant ramen every night is a cliché for a reason.

There are plenty of reasons why it can be difficult to cook healthy meals for yourself all the time, especially when you’re low on ingredients or money, or when you have to follow specific dietary or nutritional guidelines. But sometimes, it’s just because it isn’t obvious which recipes are healthy just by looking at the ingredient list. You might notice that the vast majority of recipes don’t include nutrition facts, and the ones that do have narrow selections and mostly include health-first, taste-second recipes. That’s no good.

A lack of easily accessible basic nutritional information for common recipes should never be a reason to sacrifice your health. We thought that, with some simple data transformations, it would be possible to scrape nutritional information for recipes online.

Introduction

Our dataset focuses on the nutritional profiles of publicly available food and drink recipes on various popular culinary websites; we chose to focus on US-based recipe catalogues to avoid language confusion and to ensure a stronger cultural grasp of the recipes we analyzed.

The dataset is arranged into 2,976 rows and 19 columns, with each row corresponding to a given recipe entry. Five columns are reserved for recipe metadata (e.g. title, average rating, URL), and the remaining 14 are nutrition-based. We used the USDA Food Composition Databases API to access nutritional information for each ingredient, then applied natural language processing techniques to parse the units of measurement in each recipe – think pounds, cups, teaspoons, or grams – and converted each to a standard mass the API could work with.

Data Acquisition

While the data acquisition process was relatively straightforward in principle, our team had to overcome significant technical hurdles to obtain the recipe data and convert it to useful nutritional information.

First, we needed to design a web crawler that found directories in target websites that matched a particular signature pointing only to recipe pages. After tinkering for a while, we found that most of the sites we tested had a “recipe” tag in their url path that made this distinction obvious. We used dirhunt, a command-line open source site analyzer that attempts to compile all directories while minimizing requests to the server (Nekmo).

Here’s what dirhunt looks like in action. There are a lot of blog posts/stories we don’t want, but we can filter for URLs whose second-to-last path segment includes “recipe” to get actual recipes we can use!

Next, we needed to scrape the data from each recipe URL. We ended up using recipe-scrapers, an open-source Python package for gathering basic data from popular recipe site formats (hhursev). This package gave us easy access to the recipe titles, average ratings, and ingredient lists, among other important data.
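For reference, pulling those fields looks roughly like this. The scrape_me helper and the title/ingredients accessors are part of the package’s documented interface as of the version we used; the URL is just an example recipe.

from recipe_scrapers import scrape_me

# any supported recipe URL works here; this one is just an example
scraper = scrape_me("https://www.allrecipes.com/recipe/20144/banana-banana-bread/")
print(scraper.title())        # recipe title
print(scraper.ingredients())  # list of raw ingredient strings, e.g. "1 1/2 cups ..."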

Critically, the ingredients were formatted as a Python list of strings in their raw form. For instance, one item could look like “1 1/2 cups unbleached white flour”. We needed to first convert the “1 1/2” into a proper floating point number, as well as change all measurements into the standard grams that the USDA nutritional database requires. Python offers a “fractions” module for converting strings of fractions into numbers, as well as a “word2number” package for converting number words to numbers (e.g. “three” to 3).

We wrote a lookup table for converting all masses into grams, as well as all volumes into grams based on the ingredient type. For volume-based ingredients not found in our lookup table, the script defaulted to using the conversion factor for water (approx. 240 grams per cup), which proved to be a close estimate for a wide range of food types – most food is mostly water!
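Here’s a simplified sketch of that parsing and conversion step. The gram-per-cup values are illustrative, and the real script handled more units and edge cases.

from fractions import Fraction

GRAMS_PER_CUP = {"flour": 125, "sugar": 200, "butter": 227}   # illustrative values
WATER_GRAMS_PER_CUP = 240           # fallback: most food is mostly water
UNIT_TO_CUPS = {"cup": 1, "cups": 1, "tablespoon": 1/16, "tablespoons": 1/16,
                "teaspoon": 1/48, "teaspoons": 1/48}

def ingredient_to_grams(line):
    """Convert '1 1/2 cups unbleached white flour' into an approximate mass in grams."""
    tokens = line.lower().split()
    qty, i = 0.0, 0
    while i < len(tokens):                  # accumulate leading numbers like "1", "1/2"
        try:
            qty += float(Fraction(tokens[i]))
            i += 1
        except ValueError:
            break
    unit, rest = tokens[i], " ".join(tokens[i + 1:])
    cups = qty * UNIT_TO_CUPS.get(unit, 0)
    grams_per_cup = next((g for name, g in GRAMS_PER_CUP.items() if name in rest),
                         WATER_GRAMS_PER_CUP)
    return cups * grams_per_cup

print(ingredient_to_grams("1 1/2 cups unbleached white flour"))   # about 187.5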

Finally, we used the USDA Food Composition Databases API to search for these ingredients and obtain nutritional data. The API allows for searching with natural language, though some foods were still impossible to find through the API; we decided to discard any recipes that had untraceable ingredients given the time restrictions of the competition.

The request limit on this API also meant that we were realistically limited to a few hundred recipes per site for our final dataset; we decided to spread relatively evenly over the sites to include a wide range of recipe influences.

Dataset Description

Recipes-Meta is a database of recipes scraped from popular websites, with detailed nutrition data computed from USDA data for each ingredient. It is meant to help consumers make more informed eating choices and to offer insights into the relationships between ingredients, nutrients, and site visitor opinions. Each row is a recipe entry that can be uniquely referenced by its URL.

Columns:

Title: Name of recipe
Rating: Average rating of recipe as given by users (some sites do not have this feature)
URL: Web address of the recipe (unique/primary key)
Servings: Number of people the recipe serves, i.e. serving size (nutrition data is divided by this)
Ingredients: List of ingredients in the recipe
Energy (kcal): Total calories of the recipe per serving in kcal
Carbohydrate, by difference (g): Total carbohydrates of the recipe per serving in g
Protein (g): Total protein of the recipe per serving in g
Calcium, Ca (mg): Total calcium of the recipe per serving in mg
Cholesterol (mg): Total cholesterol in the recipe per serving in mg
Fatty acids, total saturated (g): Total saturated fat of the recipe per serving in g
Fatty acids, total trans (g): Total trans fats of the recipe per serving in g
Fiber, total dietary (g): Total dietary fiber of the recipe per serving in g
Iron, Fe (mg): Total iron content of ingredients used in the recipe per serving in mg
Sodium, Na (mg): Total sodium of ingredients used in the recipe per serving in mg
Sugars, total (g): Total sugar content of the recipe per serving in g
Total lipid (fat) (g): Total lipids/fats of the recipe per serving in g
Vitamin A, IU (IU): Total vitamin A of the recipe per serving in IU
Vitamin C, total ascorbic acid (mg): Total vitamin C of the recipe per serving in mg

  • Red indicates nutrition-related data
  • Blue indicates recipe-related data

Potential Insights

There exists a critical gap between growing consumer demand for health-conscious eating options and readily available nutrition data for recipes online. Most consumers looking to eat balanced, tasty, and affordable meals while meeting their health goals must eventually learn to cook their own meals. However, convenient data to make informed choices for recipe-based meal planning does not exist for most popular recipe sources online.

We also noticed that the few websites that do show nutrition data for their recipes are geared towards consumers that already follow a diet plan or practice healthy eating as a part of their lifestyle. Further, these websites are often limited in scope, including only a small set of specific recipe choices or community-generated recipes from a small user base.

Considering that access to healthy eating options and food education in America is growing increasingly unequal, our approach to spreading awareness about nutrition aims to target the ‘average eater’ or general public (Hiza et al.). This requires us to access nutrition data for a wide range of popular websites, rather than the few that readily offer this information. While our algorithm is not perfect, it can serve as a starting point and proof-of-concept for similar endeavours in the future.

We suggest the following potential insights, though there are many more viable routes for analysis:

  1. Determine if “healthiness”/nutrition stats somehow relate to the average rating of recipes (a short pandas sketch of a few of these ideas follows this list).
  2. Generate a custom list of recipes that fit a specific range of macronutrients (protein/carbs/fats).
  3. Define overall nutrition metrics in all recipes, for example, to find meals that have an especially high protein to calorie ratio.
  4. Check if recipes that include certain ingredients tend to be more or less healthy.
  5. Analyze which websites tend to post healthier and/or more well balanced recipes.
  6. Produce a nutritional ranking of all recipes according to their adherence to USDA guidelines (or some other metric).
  7. Flag recipes with especially high sugar, fat content, trans fat or cholesterol content and make this flag obvious if and when it is retrieved.
  8. Write an algorithm that generates customized eating plans that meet daily nutritional requirements.
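As a taste of what that analysis could look like, here’s a minimal pandas sketch of a few of the ideas above, assuming the dataset has been exported to a CSV (the filename is hypothetical) with the column names listed earlier.

import pandas as pd

df = pd.read_csv("recipes_meta.csv")   # hypothetical export of the dataset

# 3. protein-to-calorie ratio: grams of protein per 100 kcal
df["protein_per_100kcal"] = df["Protein (g)"] / df["Energy (kcal)"] * 100

# 2. recipes within a macro range: 25-45 g protein and under 60 g carbs per serving
macro_fit = df[df["Protein (g)"].between(25, 45) &
               (df["Carbohydrate, by difference (g)"] < 60)]

# 1. does protein density relate to the average rating?
print(df[["protein_per_100kcal", "Rating"]].corr())

# 7. flag recipes in the top 10% for sugar per serving
df["high_sugar_flag"] = df["Sugars, total (g)"] > df["Sugars, total (g)"].quantile(0.9)

print(macro_fit[["Title", "Rating", "protein_per_100kcal"]].head())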

For a more business oriented goal, this data could be integrated with personalised consumer-level data to design customized eating plans that follow individual nutritional requirements based on height, age, weight, BMI, or other factors. There are surely many interesting possibilities we did not discuss in this report. Happy hacking.

Sources

Hiza, Hazel A.B. et al. “Diet Quality of Americans Differs by Age, Sex, Race/Ethnicity, Income, and Education Level”. Journal of the Academy of Nutrition and Dietetics, Volume 113, Issue 2, 297 – 306.

hhursev. “recipe-scrapers – Python package for scraping recipes data”. Github Repository, 2019, https://github.com/hhursev/recipe-scrapers.

Nekmo. “Dirhunt – Find web directories without bruteforce”. Github Repository, 2019, https://github.com/Nekmo/dirhunt.

Recipe websites referenced:

https://cooking.nytimes.com/

https://allrecipes.com/

https://epicurious.com/

https://seriouseats.com/

Vocabulary Games


Hi there! Long time no see.

Let’s play a game. 

I’m going to give you all the vowels of a word, but none of the consonants. Instead of those, I’m putting empty spaces. The empty spaces are precise—if there’s only one space, there’s only one missing consonant. If two, then two. Then you’re going to guess which word I started with.

Here’s an example:

_ _ e _ e

There. What do you think it is?

Oops. I’ve already given it away. It’s the first word I used after showing you the puzzle. That’s the word I intended to be the solution, at least.

But you probably realized that a lot of other words could’ve worked too. You could’ve answered “where,” “scene,” “theme,” “these,” “crepe,” “abele,” or “prese.” All of those fit the vowel scheme I wrote down (some more possible answers here).

As a side note, “niece” or “sieve” would not have worked, since I would’ve had to show you the “i.” The link I just gave you also includes some of these false positives.
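If you want to play along programmatically, here’s a quick sketch that finds words fitting a scheme, using a plain word list like the words.txt file I use later in this post. Blanks are treated as any non-vowel letter, so y counts.

import re

def scheme_to_regex(scheme):
    """Turn '_ _ e _ e' into a regex where each blank is exactly one non-vowel."""
    pattern = "".join("[^aeiou]" if part == "_" else part for part in scheme.split())
    return re.compile(pattern)

words = [line.strip().lower() for line in open("words.txt", encoding="utf8")]
pattern = scheme_to_regex("_ _ e _ e")
print([w for w in words if pattern.fullmatch(w)][:20])   # where, scene, theme, ...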

Let’s try a more difficult and interesting vowel scheme, which only has one common solution (a few, technically, but they all share the same root).

  1. _ eio _ i _ e _

Hope you like chemistry (all the answers are at the bottom, if you want to check).

There are some interesting properties to this game.

First, the number of possible solutions to a given vowel scheme is pretty unpredictable. It follows the obvious pattern that more common vowels usually give more possible combinations, but their placement matters too.

As a general rule, the simpler the scheme and the less specification, the more words can fit it, up to a point. Vowel schemes that include common combos like

o _ _ e (-orne, -ople, -ophe, -orse)

a _ e (-ane, -ace, -ale)

_ io _ (-tion, -cion, -sion)

also tend to have higher word counts.

In fact, one of the best vowel schemes I found for maximizing possible words is (note it includes a super common a _ e ):

_ a _ e _

Imagine capturing all the countless verbs that would fit the first four letters of that scheme and then effectively tripling that number (e.g. baked, bakes, baker). Then add all the other possibilities.

In cryptographic terms, every empty space multiplies the possibilities by about 20 (roughly 4.3 bits of entropy, since y is usually used as a vowel). This isn’t quite a code, though, so the comparison isn’t great. Vowel scheme solutions always have to be actual words.

Increasing the number of empty spaces is a good way to increase the number of combinations, but again, only up to a point. Few words have three consonants in a row unless the structure is designed to allow for them (coincidentally, the word “three” is part of one such structure), and even fewer have four in a row. Also, multi-letter combos generally have to follow a set of structures which, depending on the word, might end up giving fewer possibilities than just a single letter (e.g. “tr”, “ch”, “qu”, etc. for two letters).

So changing word length in general is unpredictable, unless you’re at an extreme low or high. Take, for example:

_ o

which can clearly only ever have up to 20 or 21 solutions for all the consonants and possibly ‘y’.

On the other extreme end, you have:

  2. _ e _ i _ e _ i _ e _ i _ ua _ e _

or

  3. _ _ o _ _ i _ a u _ i _ i _ i _ i _ i _ i _ i _ a _ i o _

which are so long and convoluted that even without having any idea of the actual word, you can see they should clearly define only one solution (this time I’m sure of it).

But (and you guessed it) there are still exceptions. Some oddly long and specific designations can actually allow for way more words than you might expect. Take, for example:

  4. _ u _ _ i _ a _ io _

How many solutions can you find? Once you get one, the others follow a similar sort of pattern, and you’ll start to see why it supports so many words relative to other vowel schemes of its length.

I’m thinking that even a machine learning/natural language processing solution would have trouble predicting the amount of combinations a given vowel scheme will have. The structure of words feels too unpredictable and organic. I could totally be wrong and I still want to try, but that’s for another day.

Similar Words


The title of this post is vocabulary games. That’s plural. I’ve only got two, but I saved the best for last:

Try to find a word where simply switching one letter drastically changes the meaning. Bonus points for using longer words.

This doesn’t have that many interesting properties (granted, it’s not even really a game), but it can be pretty funny.

Attaching and attacking.

Altercation and alternation.

Clinginess and cringiness.

Heroes and herpes.

Morphine and morphing.

Artistic and autistic.

Revenge and revenue.

There’s a lot of these in English. Find your own.

OR you can write a program to find every pair of English words that are just a single letter apart. I did this, actually.

About a year ago, a friend of mine came up with this “game” and I wanted to take it to its logical end. It took a lot of lines of Python code and a long time to run. Recently, I revisited the project and tried to improve on it with all the programming knowledge I’ve gained over that year:

First, just for bragging rights, I can now do this in one line.

match_dict = {'length_%s_matches'%str(length):[comb for comb in itertools.combinations([w for w in [line.rstrip('\n') for line in open("words.txt", encoding="utf8")] if len(w) == length],2) if len(comb[0])-len([l1 for l1, l2 in zip(*comb) if l1==l2])==1] for length in [7,8,9,10,11,12,13,14,15,16,17,18,19,20]} 

This is not a readable, editable, or in any sense advisable way to write code. But when I started shortening it, I immediately wanted to know if this was possible. There you go. All the word pairs get saved into “match_dict” with the keys being “length_[7,8,9,etc..]_matches”.

Here’s a better method that has readable code:

import itertools   # needed here (and for the one-liner above)

words = [line.rstrip('\n') for line in open("words.txt", encoding="utf8")] # strips the line delimiter \n while formatting the file into a list called 'words'
accepted_lengths = [7,8,9,10,11,12,13,14,15,16,17,18,19,20]

def match_finder(array):
    # keep pairs of equal-length words that differ in exactly one position
    return [comb for comb in itertools.combinations(array,2) if len(comb[0])-len([l1 for l1, l2 in zip(*comb) if l1==l2])==1]

length_dict = {"length_%s_list"%str(length):[w for w in words if len(w) == length] for length in accepted_lengths}
match_dict = {'length_%s_matches'%str(length):match_finder(length_dict['length_%s_list'%str(length)]) for length in accepted_lengths}

And here’s one way to format it into a single file:

with open('Similar Words.txt','w') as similarwords:
    for length in accepted_lengths:
        similarwords.write('### Similar Words of Length %s ###\n\n'%length)
        for pair in match_dict['length_%s_matches'%length]:
            similarwords.write("%s and %s\n" %(pair[0].capitalize(),pair[1].capitalize()))
        similarwords.write('\n\n\n')

If you want to run it yourself, you’re also going to need a list complete with all 400000-odd English words. You can find one online pretty easily, but I got you covered.

Here are the results if you just want to look at those. There’s too much to sort through by myself, so have a look and let me know if you find anything good that I missed.

That’s all my games for now. Happy word-ing.

Answers


  1. Deionized, deionizes, deionizer (Bonus solution: Meionites).
  2. Hemidemisemiquaver (Semidemisemiquaver is an alternate spelling, but I don’t count it as unique).
  3. Floccinaucinihilipilification (Fun fact: this has the most “consonant + i groups” in a row of any word).
  4. Duplication, culmination, publication, lubrication, sublimation, etc.

Madelung – The Realest Abstraction

If you’ve done any physics work before, you might have noticed that the formulas tend to include a lot of constants: the speed of light, Planck’s constant, the Bohr magneton, electron/proton/neutron masses, and so on. It makes sense that we need constants, since it would be pretty odd/coincidental if the relationships between real, physical quantities worked out to pleasant numbers in our decimal system. Unlike in mathematics, physics constants are generally real things you have to measure and apply models to in order to calculate.

So, they’re not usually defined abstractly in the same way that pi or e are. Still, there are a few useful constants in physics that have abstract definitions, just like in pure mathematics. One of those constants is the Madelung constant—what I have fittingly dubbed “the realest abstraction”—and it’s pretty damn cool.

*Mostly known for his unethical treatment of cats.

The Madelung constant, named after Erwin Madelung (not to be confused with the other, more famous Erwin in physics)*, is used to determine the total amount of electrical potential for an ion in a given crystal lattice. If that sounds bloated, don’t worry—the exact physical interpretation won’t be important in our discussion, but you can basically think of it as the answer to this question:

Assuming an infinite structure (so whatever pattern the atoms take on just continues on forever) and approximating atoms as point charges (so any weird charge distribution is ignored), what’s the total effect from electrical forces on a single atom by all the others in the structure?

One important thing to note is that this value converges. In other words, if I start summing the effects of each atom individually and go outwards from the center by distance, the sum will tend towards a specific value (the Madelung constant). Since the effect of any single atom falls off (though not exponentially) as you increase the distance, this should make some intuitive sense.

Another interesting property of the constant is that it’s unitless—a pure maths number. In practice, it’s intended to be multiplied by some combination of the electric charge and atomic distance, but you can think of the constant itself as a fundamental property of a crystal’s structure, or even a fundamental maths constant. You’ll see why this is a good description soon.

For the crystal of NaCl (also known as salt), there are two Madelung constants—you get different values if you use sodium (Na) or chlorine (Cl) as the reference atom (otherwise, the constant will always be the same). Since the two types of atoms occupy positions in a pattern that maintains some level of symmetry if you start switching between the two, the effects of each are the same magnitude and differ only by a sign.

Here’s what it looks like. Notice how each layer forms its own “checkerboard.”

The NaCl crystal has a very simple pattern, which makes it an ideal example for this. It occupies a cube structure where sodium and chlorine atoms switch off as you move across adjacently. You can think of it like a checkerboard that extends infinitely, with Na placed where the white squares are and Cl on the black ones. Add another layer by placing another checkerboard on top of the one you already have, except shifted one space over. Keep adding layers, and pretty soon you’ll have the lattice we’re looking for.

To simplify things, before I show you the calculation, let’s set the charges of Na and Cl to be equal and opposite unit charges, so that the charge of Na is just 1 and the charge of Cl is -1. Let’s also set the distance between the reference atom and its nearest neighbors—the ones just adjacent to it on our checkerboard pattern (there are 6 in total)—as a distance of 1 away.

With all those assumptions, the formula for finding the Madelung constant of NaCl looks something like this:
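M_{Na} = -M_{Cl} = \sum_{(j,k,l) \neq (0,0,0)} \frac{(-1)^{j+k+l}}{\sqrt{j^2 + k^2 + l^2}}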

It’s stuffed, but I’ll try to explain each part: M is the Madelung constant, and the subscripts represent that constant from either the reference atom Na or Cl (remember how the charges were reversed). The summation goes from negative infinity to infinity for (j,k,l), hitting every combination of the three along the way. You can think of (j,k,l) as the three coordinates describing a particular atom’s position in the lattice (this means (j,k,l) from negative to positive infinity will describe every possible atom). The origin (0,0,0) is our reference atom, (0,0,1) would be one of the 6 nearest neighbors, and so on (if you’ve done anything with 3d coordinates, it’s literally the exact same thing).

You might have also noticed that there’s a way to tell if the atom we’re looking at is sodium or chlorine just by looking at its coordinates: add them all together—if it’s an even number, it’ll be the same atom as the reference/origin, and odds are the other type. If you consider how the nature of this checkerboard pattern works in your head, it should start to be clear exactly why that works.

With that in mind, we can understand the numerator, which represents the electrical “effect”—It’ll work out to be positive if the atoms are the same and negative if they’re different. Lastly, the denominator is just the distance, with larger distances giving smaller effects.

So what happens when you actually do the sum? It depends on how you sum it, and this is where things get really interesting. There are two ways to do it that make the most intuitive sense, and I’ll describe them both here.

One way is to add them up like spheres of increasing radii: add the 6 nearest neighbors, then the next closest set ((0,1,1) and the other atoms at distance \sqrt{2}), and so on. The sum would then be -\frac{6}{1} (the 6 nearest neighbors at distance 1) + \frac{12}{\sqrt{2}} (the 12 next-closest atoms, \sqrt{2} apart) - \frac{8}{\sqrt{3}} (the 8 “corners” of the first 3x3x3 cube) + \frac{6}{2} (like the nearest neighbors but one step further out), and so on.

There are some really interesting properties to this summing pattern (OEIS #A005875):

  1. The number of atoms at each distance going outwards follows a peculiar sequence: 6, 12, 8, 6 (the first four already described), then 24, 24, 12, 30, 24, 24, 8, 24, 48, 6, 48, 36, 24, 24, 48, 24, 24, 30, 72, and so on (the short script after this list reproduces this sequence).
  2. It’s especially weird when you consider that the number of atoms at each distance is the same as the number of equidistant points from a cubic center, which seems like something pretty abstract/purely mathematical.
  3. This pattern is equivalent to the number of ways of writing a nonnegative integer n as a sum of 3 squares when 0 is allowed. For example, n=1 can be written as 1 or -1 squared in any of the three square places with the other two as zero, giving 6 unique ways (With some effort, you can figure out why that works to give you the right pattern).
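Here’s the short script mentioned above. It brute-forces the count of lattice points at each squared distance n = j² + k² + l²; the zeros it prints are squared distances where no atoms sit (7, 15, 23, …), which the list above skips over.

from collections import Counter
from itertools import product

N = 12   # search the cube [-N, N]^3; counts are complete for n <= N^2
counts = Counter(j*j + k*k + l*l
                 for j, k, l in product(range(-N, N + 1), repeat=3))

print([counts[n] for n in range(1, 27)])
# [6, 12, 8, 6, 24, 24, 0, 12, 30, 24, 24, 8, 24, 48, 0, 6, 48, 36, 24, 24, 48, 24, 0, 24, 30, 72]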

And that’s already a fairly interesting result from seemingly unassuming beginnings.

The red line follows the resulting Madelung Constant if you sum it using the sphere method. Look at how unpredictable and strange the trend is (The blue line is the cube method, which I’ll describe soon).

But here’s the real kicker: doing the sum this way doesn’t actually get you the right constant. In fact, it won’t even get you a constant—it doesn’t converge. And I don’t mean that in the sense that it will tend towards infinity or negative infinity, which would be boring but somewhat understandable. It doesn’t converge in the sense that as you increase the distance, it just sums to random values around the actual Madelung constant that never seem to get any closer (though taking the average of those fluctuations over a long period can work, albeit slowly).

You might have already realized why that’s really weird: as you get further away, the distance increases and the effect of any individual atom is lessened. This should really lend itself to converging. But you might have noticed something else: while each individual atom at a greater distance has a smaller effect, the number of atoms at each distance also grows.

There are just generally more atoms at further distances, a fact you can pick up on just by picturing the cubic lattice. Still, the value doesn’t tend towards either infinity, which means the increases in distance and in atom count somehow “balance out” in the sum, creating a sort of self-regulating parity. This is even more surprising when you consider that every other atom impacts the origin in an opposite manner, which should add to the difficulty of a potential balancing act.

It also makes the simplicity of the next summing method surprising: sum using expanding “cubes” instead of spheres, taking all the atoms in the 3x3x3 cube, then all the additional atoms in the surrounding 5x5x5 “shell”, then the 7x7x7, and so on, and it converges almost instantly. For NaCl, the value comes out to be about ±1.748 (depending on whether you use chlorine or sodium as the reference).

As a side note, it converges even faster if you only take the “fraction” of each atom that’s in the current shell. In other words, “face” atoms are 1/2 inside or outside (and so you add only half the value until the next shell), atoms on the edge are either 1/4 or 3/4, and corners count for 1/8 or 7/8. I’ll probably post some code for this soon (edit: it’s posted).
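Here’s a rough Python sketch of that fractional-shell idea (not necessarily the exact code posted): face, edge, and corner atoms of the current cube count for 1/2, 1/4, and 1/8 respectively, with the rest picked up by later shells.

from math import sqrt

def madelung_cube(half_width):
    """Partial Madelung sum for NaCl over the (2*half_width+1)^3 cube,
    weighting boundary atoms by the fraction that lies inside the cube."""
    total = 0.0
    for j in range(-half_width, half_width + 1):
        for k in range(-half_width, half_width + 1):
            for l in range(-half_width, half_width + 1):
                if j == k == l == 0:
                    continue                         # skip the reference atom
                weight = 1.0
                for c in (j, k, l):                  # halve once per boundary coordinate
                    if abs(c) == half_width:
                        weight *= 0.5
                sign = -1 if (j + k + l) % 2 else 1  # opposite charge when the sum is odd
                total += sign * weight / sqrt(j*j + k*k + l*l)
    return total

for n in (1, 2, 4, 8):
    print(n, madelung_cube(n))    # approaches about -1.7476 very quickly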

I really do think this is amazing, and I may just be scratching the surface. If I could, I’d do my thesis on this (though apparently, someone else already did a fairly exhaustive analysis).

So what other weird and interesting properties of the Madelung constant can you come up with? Have at it, and tell me how it goes.
