New Paper on Riemann’s Ear Paper

A close friend recently forwarded an interesting paper to me by Andrew Bell, Bryn Davies, and Habib Ammari, called Bernhard Riemann, the Ear, and an Atom of Consciousness. Though this paper was written only recently (July 2021), it examines a partial manuscript composed by Riemann around 1865-1866. According to the authors, Riemann’s work not only makes several original, accurate, and modern contributions to the science of human hearing, it also suggests a valid method of investigating human sense perception in general.

In this post, I’ll address four topics discussed by the authors: 1. the mind-body paradox, 2. Riemann against the materialists Helmholtz and Newton, 3. quantum mechanics and weak signals, and 4. Riemannian manifolds. Besides imparting an appreciation of the Bell et al. (2021) paper (and asking for more like it), my main goal here will be the following.

The authors assert that Riemann desired to apply his mathematical advances to the problem of how an ear translates external stimuli into objects of thought. I assert that they have it backwards. Riemann’s mathematical advances were a result of his conscious preoccupation with how the human mind makes sense of the world. His study of the ear was an attempt both to apply his work up to that point and to shake the tree of knowledge for more fruitful mathematics.

Also, I’m writing this on the eve of my friend’s 70th birthday. On such a momentous occasion, let this be dedicated to his longevity and future contributions to human progress!

The Mind-Body Paradox

Maybe the term “Mind-Body Paradox” means different things to different armchair philosophers. Here, it refers to the problem of getting phenomena that seemingly occur outside the mind (“the body”) to register as an effect inside the human mind. This issue goes back to at least Plato, but it was also addressed by such figures as the German polymath Gottfried Leibniz and by Riemann’s immediate philosophical predecessors, Johann Herbart and Gustav Fechner. Note that Fechner was also an important influence on the early Gestalt psychologists Wolfgang Köhler, Max Wertheimer, and Kurt Koffka.

From Bell, et al. (2021):

For Riemann, the existence of a mind and its perception of underlying mathematical patterns and structure must underlie the scientific enterprise. The human mind is continuous with the physical universe, so the power of the mind to create thoughts and hypotheses comes before any causal power attributed to vibrating molecules in the air. For the scientist, the “problem of the organ”, as he put it when referring to the function of the cochlea, is to maintain continuity and provide a faithful interface between the sound wave and the apprehending mind, and the same logic applies to the submicroscopic motions of the middle ear.

Riemann was interested in this problem at least as early as 1853, around the time he penned his now famous “Philosophical Fragments”. Those fragments present a young Riemann curious about how Geistesmassen (thought-objects) enter into the mind in relation to physical changes in the perceptible universe. In this context, he describes several aspects of how sense perceptions must function to produce this transformation of outside to inside the mind. Besides describing a plausible hypothesis of how sight transforms viewed phenomena into thought-objects, he also goes rogue and describes a Fechnerian Earth Soul that perceives the composition of the atmosphere via the trees and plants as sense organs.

Another, up to now unpublished, example of Riemann’s interest in the human sense apparatus was his small study of the so-called Corpuscle of Vater. One page in Riemann’s manuscripts displays a strange diagram of concentric semicircles into which appear to enter refracting rays of light (see figure). The diagram and ensuing description may have been Riemann’s attempt to reproduce a page from some anatomy publication at the time. Here is a translation one of my old collaborators made of Riemann’s Fraktur handwriting:

Riemann’s sketch of the Corpuscle of Vater

Fig. 7. Diagram of half of a longitudinal cut through a corpuscle of Vater.

The elliptically curved lines are meant to denote:
aa the boundary of the outermost capsule
bb boundary inner and outer capsules
cc innermost capsule, sheath of the inner nucleus
dd length of the terminal fibril

The cross section of the inner nucleus is shaded darkest, then follows the inner capsule system, and the outermost is the brightest. The parallel lines e signify the direction of the compression waves, which arise from the outer skin surface and concentrate at the line dd; the more darkly shaded have greater density and less elasticity than the brighter ones.

Today, the Corpuscle of Vater is recognized to be one of four types of mechanoreceptors in the skin of mammals, generally understood to register texture and pressure. It was the first sensory receptor ever discovered by a biologist, Abraham Vater, around 1717. Thus, here is Riemann, searching for the biological organs present in humans that mediate the relationship between the surrounding world and the mind. It’s interesting that the organ, and specifically Riemann’s sketch of it, resembles Riemann’s own diagrams of multiply connected surfaces.

Helmholtz and Newton are Lame

Bell et al. (2021) start their study with the relationship between Riemann and the pair of Hermann Helmholtz and Isaac Newton.

First, on Newton:

…[T]he core of the matter is set out on the first page of Riemann’s text: We do not – as Newton proposes – completely reject the use of analogy (the “poetry of hypothesis”). Newton’s well-known statement that we must keep to the facts and not deal in hypotheticals clearly antagonised Riemann who saw the human mind as the centre of everything: it is the essential starting point for framing notions of the self and its place in the world… Science cannot get started – it is lame, as Einstein once said in connection with religion – without taking [mind and soul] as its foundation.

We won’t go too far into the Newton issue here, since the point is pretty clearly made by Bell et al. (2021). However, it is my own studied opinion that Newton should be understood not as an individual person, but rather as a project designed by committee with the purpose of rendering science – continental European science as led by Leibniz, in particular – lame. The advances I’ve studied that are attributed to Newton’s genius (admittedly, not all of them) can always be traced to other visionaries like Kepler, Huyghens, Bernoulli, and Leibniz. It is notable that, after Leibniz presented the clear steps that led him to the discovery of the integral calculus, Newton (who chaired the committee that investigated who made the discovery first) refused to present his own. Perhaps he was telling an embarrassing truth when he claimed that he did not make hypotheses.

Now, on Helmholtz:

Riemann’s view is that hearing should be viewed as top-down, not bottom-up, an arrangement in which top is the mind and bottom is matter. On this view, the mind is part of a manifold which reaches out through the ear and perceives vibrations, and the manifold includes all the psychophysical properties that Weber-Fechner law prescribes. Riemann thought something is missing if we take the view, as Helmholtz did, that vibrations in the ear create a causal cascade of mechanical motions, neural transduction, nerve propagation, and electrical activity in the brain.

So much for Helmholtz, who faithfully carried out a Newtonian analysis.

Weak Signals

This part was particularly fascinating for me, or at least the parts of it that I understood. Bell et al. (2021) invoke quantum mechanical arguments to extend Riemann’s investigation of how faint a sound the ear can perceive. Riemann describes “Nicholson’s report that the call of the Portsmouth sentry is clearly audible at night at a distance of 4 to 5 English miles, at Ryde on the Isle of Wight”. He then calculates that the intensity of the sound reaching such a listener’s ear could easily be 1/10,000,000 of the intensity at the source.

Bell et al. (2021) extend this with recent research, that demonstrates “the displacement of the eardrum is then truly microscopic (some… 10^-10 m), about the diameter of the hydrogen atom”. In order to bring in quantum mechanical effects, it is necessary that thermal motions within the auditory complex be reduced significantly. Bell et al. (2021) proceed to describe a type of feedback loop initially hypothesized by biophysicist Bialek, which effectively reduces the temperature of the system. Analogue experiments performed with lasers reduced the temperature of a suspended 1 gram weight to 0.007 K. The authors then proceed to describe a possible quantum of thought, the Psychon of Eccles (1990).

Leaving aside Psychons, about which I know nothing, Riemann’s work is compatible with that of another scientist who is typically not mentioned in relation to the senses: Johannes Kepler. I wrote an article a while back that compared Kepler’s World Harmonics, book 4, with radiation-influenced periodicities in living organisms. In book 4, Kepler describes how influential various configurations of the planets in the night sky are on organisms. This section of World Harmonics tends to be difficult for modern readers, because it seems to lend authority to astrological considerations. However, it is clear that astronomical configurations have some effect on the creatures and plants of the Earth: consider the lunar cycle of seashore oysters, the diurnal cycles of plants, or the annual cycle of influenza infection.

One major criticism of astrology (besides that the popular presentation is flim-flammery at its best) is that neither the gravitational nor the electromagnetic force of the other planets is strong enough to have any effect on the development of a human embryo, or on any other human life process. However, humans (and, in some ways, animals and plants) can certainly be moved by forces other than the physical.

An example makes this clear: Back in 1938, one day before Halloween, Orson Welles got on the radio and began reading news reports about an alien invasion in the United States. The broadcast was taken by many people as real news, driving them to call the police to ask about the invasion, and some to run out into the streets in panic.

Did the electromagnetic force of the radio waves drive people into the street? Did the force of the sound waves from home radios compel people to pick up their phones and dial their local police offices? No, it was the content of the broadcast.

Similarly, I argue that Kepler and Riemann make a great case for humans, and possibly other organisms, to respond to exceedingly weak signals in ways that imply causation, due to the content of those signals. Where Kepler defines which configurations should be most influential, Riemann investigates exactly how those weak signals interact with the human mind.

Riemannian Manifolds

My final observation here regards Riemann’s manifolds. Because the entirety of Riemann’s ear paper concerns the mechanism of hearing and includes no advanced mathematics, Bell et al. (2021) extrapolate based on his previous mathematical achievements. In particular, they suggest that, had Riemann continued his investigations, he would have gotten into the mathematics of manifolds.

Here, they are at least partially on the right track, in my opinion. While writing his Philosophical Fragments, Riemann describes the thought-objects as having an “inner manifoldness”, the connections of which must be made in a way proportional to the sensed physical phenomena themselves.

Soon after scrawling out these notes (which may have been something like a pre-publication draft), Riemann was pressed by his mentor Carl Gauss to deliver his groundbreaking 1854 habilitation lecture, On the Hypotheses Which Lie at the Foundations of Geometry. Riemann described in his presentation the boundaries and general principles of how hypotheses must be ordered in order to correspond to discovered geometric relations. At the end of the lecture, he presented his intended program clearly for the perceptive audience member, when he hinted that such hypotheses must be bound by physical experiment. In other words, his true research interest was how the physical universe enters into the human mind, which process must inform how that physical universe is understood to be constructed.

So, it appears to me that Riemann didn’t intend to use his manifold concept to investigate sense perceptions, but rather the converse – he was using an investigation of the senses to discover concepts of higher mathematical constructs.

I only know of two, maybe three, other scientists who attempted, with varying degrees of success, to apply the same reasoning to other areas of physical science. Albert Einstein is the most public representation of Riemannian manifolds outside of pure mathematics that I know. However, I agree with Lyndon LaRouche that Einstein’s use is also the most basic. I’m no expert in General Relativity, but it seems Einstein’s requirements are met by the construction of a covariant tensor that is internally defined and provides for transformation of reference frames in a four-dimensional manifold. No need for Riemann’s transcendental transformation n -> n+1.

Russian academician Vladimir Vernadsky is another physical scientist who was led to Riemannian geometry for his work, this time in the domain of biogeochemistry. Vernadsky proved that biology functions according to a completely different geometry than abiotic processes. Some characteristics of living space, or the Biosphere, include intrinsic curvature, self-bounded growth, chirality, and 5-fold symmetry. Vernadsky also hypothesized that it took enormous amounts of energy to transfer matter from the abiotic to the Biosphere, and vice versa.

From this, he concluded that he required a mathematics that could describe at least two simultaneous geometric spaces (abiotic and biotic) that interact in special ways, each of which have distinct metric properties. Late in his life, academician Vernadsky proposed a third domain, the Noösphere, which encompasses human activity. Vernadsky worked with several Russian mathematicians to develop this idea, but his brainchild was never realized completely, possibly in part because of political opposition within the new Communist regime. For example, review the case of one of his collaborators, Nikolai Luzin.

The only other scientist I know of who employed Riemannian manifolds outside of pure mathematics was economist Lyndon LaRouche. Again, the actual mathematics has yet to be fully elaborated, but the general considerations are as follows. LaRouche explicitly tied each human creative act, which initiates a new phase of economic growth, to Riemann’s concept of moving from a manifold of n dimensions to an “envelope” manifold of n+1 dimensions. Each phase of an economic system is characterized by the selection of n human discoveries actively employed as technology. When a new discovery is made, a new economic system emerges that contains the old (n), but in which all activity is inherently changed due to the existence of the new domain of action (+1).

Near the end of his own life, LaRouche also suggested that there is a higher, fourth Riemannian domain that effectively contains the lower three – Einstein’s spacetime, Vernadsky’s Biosphere, and LaRouche’s physical economy (which is coherent with Vernadsky’s Noösphere).

All three of these scientists appear to have gotten to Riemann in a way similar to Riemann himself – through physical experimental considerations. Perhaps, if more researchers treated the human mind as an existent, non-physical, but motive entity, as did these four scientists, we could break out of the scientific malaise of the past several decades.


Let’s recall that young Bernhard Riemann was highly influenced by attending the Mendelssohn-Dirichlet musical salons. His teacher, Peter Lejeune Dirichlet, was married to Rebecca Mendelssohn, sister of the composers Felix and Fanny Mendelssohn, and granddaughter of the great philosopher Moses Mendelssohn. Rebecca and her husband would host parties at their home, centered around performances of Bach, Haydn, Mozart, Beethoven, and other great composers. Also in attendance were musicians Clara Schumann, Joseph Joachim, and Johannes Brahms. No doubt, young Bernhard would enjoy the deep discussions of music and mathematics at these salons with Dirichlet and others in their orbit.

Perhaps Riemann’s interest in the mechanics of the ear stemmed from his experiences at these salons. For him, the act of human hearing was not simply a mechanical cascade that translates vibrations from one place to another. It had to be a system that provided for the transmission of the most beautiful and profound ideas, such as musical compositions, between human minds. Bell et al. (2021) capture this sentiment well in their paper. I hope their work is studied by others, and helps to revive a truly Riemannian mode of research.

If you got something out of this article, please Like it, Share it, Leave a comment, and Subscribe.

Riemann’s description of a dance he was learning, possibly at one of the Mendelssohn-Dirichlet salons.

How One New Element in California’s Math Education Reform Will Make Your Kid Stupid

Mathematics education in the United States has suffered a history of abuses, from the 1960s New Math debacle to the more recent Common Core math standards. While promising improved and more widespread math literacy among our nation’s children, each step has resulted in making math more difficult to understand, and further divorced from its genesis in human scientific discovery. Today, we are facing a new leap downwards, in which math education is threatened with replacement by cultural reeducation exercises.

The California Department of Education (CADOE) is leading the herd with a new Mathematics Framework for California Public Schools, proposed in late 2020. Besides detracking students and dumping high school Calculus, the new Framework explicitly adds a new element to math reform — a focus on “White Supremacy Culture.” The stated intent of the new Framework is to narrow the achievement gap between underprivileged children and those who are more affluent. However, the real effect will be to reduce access to mathematics education for all children.

This report will summarize the worst parts of the Framework, present some opposing movements in the U.S., and delve into some of the background.

California tries to get the math out of math education

American public schools have practiced academic tracking for a long time. Tracking means that a student who shows aptitude in a certain subject is allowed to skip certain intermediate courses and take more advanced ones than their peers. It can also work the other way: a student who experiences difficulty in a certain subject may be placed in more remedial courses than their classmates.

The Framework begins by addressing this situation, and cites research which supposedly shows1 that students who are tracked into advanced mathematics classes tend to be either Asian-American or White children, while those who get held back tend to be Black or Latino. The Framework then states that mathematics education is therefore inherently biased. In order to combat this, the Framework advises a pro-active approach to encourage non-White (and, implicitly, non-Asian) kids to do better in math. However, instead of proposing ways to help struggling kids learn the material more effectively (as done, for example, by educator Jaime Escalante, portrayed in the movie Stand and Deliver), the Framework proposes to slow down the progress of advanced kids by encouraging discussions about social justice, and by entertaining wrong, “culturally informed” answers.

The most controversial tenet of the new Framework is that it discourages kids from taking Calculus in high school. It advises, among other things, that the way to give underachieving kids a better edge in math (or, rather, to “make math more equitable”) is to prevent high-achieving kids from taking more advanced classes. This means all students take the same math classes up to 10th grade, and they don’t get to Algebra until 9th grade. In many middle schools across the US, children who show aptitude for math can qualify to take their first Algebra class in 8th grade. This can then be followed by Geometry, Algebra II, Precalculus, and then Calculus in their senior year. Under the new Framework, students could only get to Calculus by taking an accelerated class in 11th grade, which mangles together concepts from Algebra II and Precalculus.

One of the primary authors of the new Framework, and one of today’s leading voices in radical mathematics education reform, is Stanford professor Jo Boaler. Boaler, a British citizen educated in Psychology at Liverpool University, at first appears to have pretty good ideas about teaching math — math is inherently visual, collaboration is needed to rapidly learn math, all students have the ability to learn math, etc. — but behind the curtain, she is really pushing to replace math education with topics under the subject of data science, like how to upload and download data from the internet, how to clean data, and how to read statistical graphs. In particular, she has her guns out for Calculus. For example, during a working session to develop the initial draft of the Framework in August 2020, she stated, “The current pathways, particularly the push to Calculus, is [sic] deeply inequitable, and has served to keep out student of color, and girls, for generations now.”

Though the Framework was to be implemented by the end of 2021, an open letter published by the right-leaning Independent Institute prompted the CADOE to delay the Framework’s adoption until May 2022. The open letter, called Replace the Proposed New California Math Curriculum Framework, has been signed by over 1,200 math professionals who work in California, and begins as follows:

California is on the verge of politicizing K-12 math in a potentially disastrous way. Its proposed Mathematics Curriculum Framework is presented as a step toward social justice and racial equity, but its effect would be the opposite — to rob all Californians, especially the poorest and most vulnerable, who always suffer most when schools fail to teach their students. As textbooks and other teaching materials approved by the State would have to follow this Framework and since teachers are expected to use it as a guide, its potential to steal a promising future from our children is enormous.

The proposed Framework would, in effect, de-mathematize math. For all the rhetoric in this Framework about equity, social justice, environmental care and culturally appropriate pedagogy, there is no realistic hope for a more fair, just, equal and well-stewarded society if our schools uproot long-proven, reliable and highly effective math methods and instead try to build a mathless Brave New World on a foundation of unsound ideology. A real champion of equity and justice would want all California’s children to learn actual math — as in arithmetic, algebra, geometry, trigonometry and calculus — not an endless river of new pedagogical fads that effectively distort and displace actual math.

In early December 2021, a group of scientists and engineers published a second petition, called Open Letter on K-12 Mathematics, which condemns the national trend of dumbing down math education in general, and targets the California Framework in particular. The authors (Boaz Barak, Edith Cohen, Adrian Mims, and Jelani Nelson) make criticisms similar to those in the Independent Institute’s open letter. In addition, they point out that “[The Framework] may lead to a de facto privatization of advanced mathematics K-12 education and disproportionately harm students with fewer resources.” In other words, because the framework would not be binding, each California school district could decide how much of the guidance to follow. Districts that typically have worse math performance could adopt the new standards (perhaps to artificially boost apparent student performance, or reduce costs associated with advanced mathematics teachers), while more affluent districts — and private schools — could retain the current curriculum. In this way, kids in poorer districts would get less access to advanced mathematics than the rich kids. As of this writing, this second letter has over 1,700 signers from all over the United States.

Antiracist Mathematics — the Coronavirus Gamble

When kids were sent home to torture their parents with “remote learning” at the beginning of the COVID-19 lockdown, an opportunity presented itself to introduce new curricula appropriate for Zoom sessions. An organization called TODOS: Mathematics for ALL jumped at the chance and published a position paper called The Mo(ve)ment to Prioritize Antiracist Mathematics. To summarize the paper: We are currently in a sociopolitical revolution characterized by protests against White police who kill Black people, and we should capitalize on this situation by reforming math. We reform math by 1. redefining what “understanding math” means, and basing it on the weakest math students, 2. removing advanced math tracks, and 3. rewriting story problems to use conditions that afflict poor neighborhoods, like poverty wages and broken families. Note, there is nothing about helping kids develop their math knowledge more effectively, but rather the intent is to get all kids to move more slowly through the material, and infuse everything with a notion that so-called “White culture” is the problem.

The sentiment is captured by a quote referenced in the Mo(ve)ment paper, which comes from the awful book We Want to do More than Survive: Abolitionist Teaching and the Pursuit of Educational Freedom by Bettina Love: “To take it a step further, in this moment we must rethink what counts as valid mathematical knowledge… If we truly believe that we are moving towards assets-based views of students, we must expand our understanding of what it means to be good at mathematics, make space for alternative ways of knowing and doing mathematics based in the community, and acknowledge the brilliance, both in mathematics and beyond, of BIPOC [Black, indigenous, people of color] in our classrooms. We must be explicitly antiracist.” [emphasis added]

California’s Framework explicitly cites TODOS, and specifically their COVID/Anti-police protest paper, throughout.

The TODOS organization was originally created in collaboration with the National Council of Teachers of Mathematics (NCTM) to help Latino children do better in California math classes. While this goal is a noble one, it should be accomplished by helping children understand math more effectively, not by degrading academic expectations so that weaker students can more easily achieve acceptable performance. NCTM itself has a history of being at the center of terrible math reform in the United States, including the “New Math” disaster that came out of the 1960s, and the math standards that continue to confuse kids in today’s Common Core curriculum. Not surprisingly, the NCTM also wants to eliminate the teaching of Calculus from high school.

Following on the heels of the COVID lockdown moves by TODOS, another organization called Education Trust-West, with funding from the Bill and Melinda Gates Foundation, published A Pathway to Equitable Math Instruction. Instead of introducing a curriculum that teaches math better, the Pathway pushes the concept that Black and Latino students are unable to excel in math classes because those classes are inherently racist. Therefore, the Pathway proposes that math classes become race battlegrounds. The implementation of the Pathway’s propaganda in public math classrooms is examined in Chapter 9 of California’s Math Framework.

The bulk of the Pathway is designed to get the educator to use critical praxis2 and contemplate how to be antiracist while teaching math. This generally means allowing non-White students who are getting a problem wrong some latitude, since, according to the axioms of the Pathway, they may think differently than typically higher-performing White and Asian students.

The workbook is broken into 5 “strides” or chapters. The first stride contains very little about mathematics. Instead, it draws heavily from a white paper called White Supremacy Culture. The white paper describes fifteen behaviors that characterize a White supremacy environment, and how to combat each of them:

  • Perfectionism
  • Sense of Urgency
  • Defensiveness
  • Quantity Over Quality
  • Worship of the Written Word
  • Paternalism
  • Either/Or Thinking
  • Power Hoarding
  • Fear of Open Conflict
  • Individualism
  • I’m the Only One
  • Only One Right Way
  • Progress is Bigger, More
  • Objectivity
  • Right to Comfort

For example, a teacher could teach 2+2=4, and a student could challenge that, in his community, 2+2=5. If the teacher responds that 4 is the correct answer, and attempts to move past this simple fact, the student could retort that the teacher is a racist because he’s being paternalistic, hoarding power and imbuing a sense of urgency by moving on, and fears the open conflict of discussing the idea with the student. The teacher could then push back and be branded a right-wing culture warrior, or just forget about the rest of the period to discuss how he is racist and feels terrible about it.

The rest of stride 1 is dedicated to helping teachers recognize when they are expressing any of these characteristics while teaching. There is nothing about modifying the actual curriculum to teach math better. It is all about identifying and challenging “White supremacy”.

The original author of the White Supremacy Culture white paper, Tema Okun, is not, and never was, Black or underprivileged. She is an upper-middle class White leftist3 who barely graduated back in 1974 from Oberlin College — famous for its arts and music programs — with a degree in physical education. She currently runs an antiracism workshop, Dismantling Racism Works (dRWorks), that has grown to shocking popularity around the United States. Her workshops, or at least her white paper, are found in such places as administrator trainings in the New York City public school system, on the list of recommended resources for the National Education Association, and in anti-racism trainings at the Episcopal Diocese of Atlanta. Okun herself even gave a keynote speech at the huge data science conference JupyterCon 2020.

Matthew Yglesias wrote a useful article on Okun called Tema Okun’s “White Supremacy Culture” work is bad. There, Yglesias argues that Okun’s white paper is less about how to end racism, and more about how to dismantle successful organizations. American public school math classes are not exactly successful organizations, but following these recipes for supposedly dismantling racism, as the California Math Framework and its predecessors are doing, will certainly dismantle what’s left of math education.

Replace the reformers

Neighborhoods that are predominantly Black and Latino do face real obstacles in math education, as shown by how rarely their residents appear in so-called STEM fields. But the solution is not to replace math education with not-math education. That would put these children into an even worse situation, and further erode what used to be the world’s top public education system. Math curricula in American public schools must not be the platform for cultural reeducation experiments.

However, this does not mean that American public mathematics curricula are doing just fine right now. Mathematics education in the USA is in a dismal state, where kids are graduating with little to no real understanding of the foundations of human progress, regardless of their race or socioeconomic status. If you don’t believe this, simply ask any American kid, whatever their color or background, to add up three fractions with unequal denominators. Then, watch them squirm. It’s not their fault they don’t understand fractions! This paralysis is built into the curriculum.
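To make the fraction complaint concrete, here is the common-denominator procedure itself, worked for three illustrative fractions (my choice, not from any curriculum document) using Python’s standard library, which performs the same exact bookkeeping with no floating-point approximation:

```python
# The procedure students should master: put the fractions over their least
# common denominator, add the numerators, and reduce. Python's Fraction
# type does this exactly; math.lcm finds the common denominator.
from fractions import Fraction
from math import lcm

a, b, c = Fraction(1, 2), Fraction(1, 3), Fraction(1, 7)

# By hand: lcm(2, 3, 7) = 42, so
# 1/2 + 1/3 + 1/7 = 21/42 + 14/42 + 6/42 = 41/42.
common = lcm(a.denominator, b.denominator, c.denominator)
print(common)      # 42
print(a + b + c)   # 41/42
```

The point is not the code but the method: a student who understands why 42 is the right denominator will never squirm at the question.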

To truly reform mathematics in the United States means to revive the American Intellectual Tradition, which goes back to people like Cotton Mather, Benjamin Franklin, and Alexander Dallas Bache. To be specific, real pathways should be added for students to relive the scientific discoveries of the past.

For example, the capstone for a primary school Algebra curriculum should be a direct study of Carl Gauss’s first proof of the Fundamental Theorem of Algebra, which also introduces Gauss’s discovery of the complex domain. This Algebra curriculum could begin around third or fourth grade by posing the problem of how to double the volume of a cube. Many fourth-century BC philosophers in Plato’s Athens believed that the so-called cube root of two could only be approximated. Incidentally, this is exactly what is generally believed today, and exactly what your calculator does when you hammer the right keys. In contrast, Plato challenged his Academy to solve the problem exactly, without approximation. Archytas of Tarentum was the first to do this, and discovered that the confluence of three crucial surfaces — cylinder, torus, and cone — provides the precise distance which allows an exact construction of the doubled cube.
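The approximation a calculator performs is easy to sketch. Here is a minimal illustration of my own (not part of the proposed curriculum), using Newton’s method to approach the cube root of two – each step gets closer, but no step ever lands exactly:

```javascript
// Newton's method on x^3 - 2 = 0: what a calculator effectively does when
// asked for the cube root of two. The answer is always an approximation.
function cubeRootOfTwo(iterations) {
  var x = 1.0; // initial guess
  for (var i = 0; i < iterations; i++) {
    x = x - (x * x * x - 2) / (3 * x * x); // Newton step
  }
  return x;
}

console.log(cubeRootOfTwo(8)); // approaches 1.259921..., exact only to machine precision
```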

A direct study of Archytas’s construction leads a student through all the rudiments of standard Pythagorean arithmetic and geometry that one might, with horror, recognize from math classes today – similar triangles, ratios, infinite series (e.g. arithmetic, geometric), basic trigonometry, powers, irrational numbers, and trigonometric/circular functions. By the time students actually reach the work of Gauss, possibly by seventh or eighth grade, it will be clear that the basis for mathematics is really physics (geometry). Neither irrational numbers, nor complex numbers, are imaginary or unattainable. Rather, they exist and can be discovered by the mind of Man, though they are outside of simple expression in a given system.

When students work through real pathways like this, they will become comfortable with most of the mathematical techniques taught in math classes today as a side effect. But they also get something extra, which is currently not offered in American public schools — they get to experience how human beings make real discoveries, something no other known form of life can do.

The American Intellectual Tradition is real, and represents the soul of our republic. True citizens of the United States should be concerned that this new attack on math education is simply an early strategy of a broader Mao-like crusade against learning. If the California Framework isn’t stopped, then it may be a tipping point for the rest of our school curricula.

If you got something out of this article, please Like it, Share it, Leave a comment, and Subscribe.


1: Stanford Professor of Mathematics Brian Conrad has demonstrated that many of the sources cited within the Framework arrive at conclusions opposed to those asserted by the Framework.

2: The term “Critical Praxis” originates with Brazilian theorist Paulo Freire. His book “Pedagogy of the Oppressed” currently sits as the second most cited work in educational research journals. Freire’s major accomplishment was to translate the Critical Theory of Nazi Martin Heidegger’s Frankfurt School into educational practices for use in primary schools. Not just the term comes from Freire – the entire concept of replacing true academics with antiauthoritarian activism stems from his work. For Freire, “literacy” does not mean being able to read, but rather understanding that we are living under a system of authoritarian oppression which must be overthrown.

3: Though the roots are not traced explicitly here, Okun’s pedigree, and especially her teachings, come, through the Frankfurt School’s Herbert Marcuse, straight from Alexander “Helphand” Parvus and Leon Trotsky’s concept of “Permanent War/Permanent Revolution”. Paraphrased, in order to achieve a final overthrow of the oppressors, it is necessary to perpetually disrupt society’s organizations through revolt.

Do Earth Lavas and Lunar Lavas Flow Simultaneously?

Mineral Moon.

A former collaborator recently contacted me with an interesting possible correlation he’d found. I’ll describe his correlation by posing a few relevant questions:

  1. Are massive volcanic events on Earth periodic?

    The geologic record on Earth presents multiple episodes of concentrated emplacements of basaltic magma near, and lava on, its surface. These “Large Igneous Provinces” (LIPs) include the Deccan Traps in India, the Siberian Traps in Russia, and the Central Atlantic Magmatic Province spread across the continental coasts of the Atlantic Ocean. All three of these LIPs have been incriminated in a mass turnover in the Earth’s ecology – for example, the Siberian Traps lava flows occurred around the same time as the greatest extinction event in the geologic record, the end-Paleozoic Permian-Triassic cataclysm.

    There have been attempts to show that these episodes happen at roughly regular time intervals, but these attempts are accepted by only a small subset of the geologic community. Were these events truly periodic, it would represent something like the pulse of the Earth. What could possibly cause the Earth to pump massive amounts of basaltic magma towards its surface over and over, on a regular schedule?

  2. Does the Solar System’s motion through the Milky Way galaxy affect deep planetary processes? Or, anything else on the planets for that matter?

    It has been shown that our Solar System passes up and down through the plane of the galaxy while orbiting the galactic center, and that the time between crossings takes anywhere from 26 to 37 million years. Some statisticians have suggested that a hypothesized periodicity in mass ecologic transformation of ~26 Myr could coincide with, and be caused by, this galactic period.

    My friend passed me one paper that suggested various other effects on the Earth due to these galactic passages. The author, Michael Rampino, attributed these effects to interactions of dark matter particles in the core of the Earth. Rampino posits that, were dark matter composed of the mooted Weakly Interacting Massive Particles (WIMPs), these WIMPs could be captured in the Earth’s gravity well, collide, and be mutually annihilated within the core, causing the release of potentially enormous amounts of heat (≥ 10¹⁹ W). This amount of heat generated in the Earth’s core could raise the core’s temperature hundreds of degrees K within only a few thousand years.
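    As a rough sanity check on that claim (using my own assumed values for the core’s mass and specific heat, not Rampino’s), the arithmetic does work out to hundreds of degrees:

```javascript
// Back-of-envelope: can ~1e19 W heat the Earth's core by hundreds of K
// in a few thousand years? Core mass and specific heat are rough values.
var power = 1e19;              // W, the quoted WIMP-annihilation heating rate
var seconds = 3000 * 3.156e7;  // three thousand years, in seconds
var coreMass = 1.9e24;         // kg, approximate mass of the Earth's core
var specificHeat = 800;        // J/(kg*K), rough value for iron at core conditions
var deltaT = (power * seconds) / (coreMass * specificHeat);
console.log(deltaT.toFixed(0) + ' K'); // several hundred kelvin
```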

    That amount of heat could certainly drive mantle plumes to the Earth’s surface, and generate emplacements of massive amounts of basalt.

  3. Are Lunar volcanic events connected with terrestrial LIP emplacements?

    Whether they are or not, Braden et al. (2014) catalogue emplacement dates of Lunar basaltic volcanism over the past 100 million years. Here is where my friend’s coincidence occurs.

    Lunar events:
    • 18 Myr (+/- 1 Myr) — “Sosigenes IMP,” covering 4.5 km²
    • 33 Myr (+/- 2 Myr) — “Ina,” covering 1.7 km²
    • 58 Myr (+/- 4 Myr) — “Cauchy-5 IMP,” covering 1.3 km²

    Earth LIP events:
    • 15.3–16.6 Myr — “Columbia River Flood Basalts”
    • 29.5–31 Myr — “Ethiopian and Yemen traps”
    • 54–57 Myr — “North Atlantic Tertiary Volc. Prov. 2”

    These seem pretty regular – the Lunar events occur about 3 million years after the corresponding Terrestrial events. Of course, for comparison, the Siberian Traps LIP, which is the largest basaltic emplacement event known on Earth, appears to only have lasted 2 million years.
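    The apparent lag can be eyeballed with a few lines of arithmetic (my own pairing of the events listed above, using the midpoint of each LIP age range):

```javascript
// Pair each lunar event with its terrestrial LIP and compute the lag.
var lunarAges = [18, 33, 58];                         // Myr before present
var lipRanges = [[15.3, 16.6], [29.5, 31], [54, 57]]; // Myr before present

var lags = lunarAges.map(function (age, i) {
  var midpoint = (lipRanges[i][0] + lipRanges[i][1]) / 2;
  return age - midpoint; // positive: the lunar event follows the LIP
});
// Each lag comes out between roughly 2 and 3 million years.
```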

  4. The big question

    My first reaction was that large-scale periodic processes are suspicious. My geology thesis advisor once mentioned that the two big “flashy” geologic paper topics are either periodic events or biggest-things-ever. Maybe that was what informed my knee-jerk reaction. It’s interesting that one source Rampino cites as critical of these kinds of periodicities, specifically periodicity in geomagnetic reversals, was written by my advisor’s husband, Tim Lutz.

    But my friend’s question isn’t really about periodic processes. There is a deeper issue here.

    Statistical coincidence is not correlation, especially when the data set is small (e.g. three terrestrial-lunar coincidences). However, the human mind is wired to look for coincidences like this, because it thirsts for evidence of hidden causes. No true cause of physical phenomena can be perceived by the senses. The cause always must be inferred by witnessing sensible events that shouldn’t happen without the cause. Strange coincidences can be just the thing that betrays a hidden cause.

    For example, you are able to read this article because of a principle called light. You can’t see light. Light is a principle that generates the relationship between the observer and the observed that we call “seeing”. It took a lot of work by a lot of creative people (Huygens, Fresnel, Planck, and Einstein, to name a few) to clothe the principle of light with the appropriate geometry and mathematics, so we can understand how it works. You can see the geometric and mathematical descriptions of light, but you still can’t see light. Because the math and geometry can generate predictions about the effects of light, we know that light is a real principle that exists.

    The human mind is designed to hunt for, and understand, these hidden principles that cause sensible artifacts to exist.

    Rampino identified several apparently linked processes – passage of the solar system through the galactic plane, and the periodicity of emplacement of terrestrial LIPs, terrestrial extinctions, and impact cratering – and suggested a possible hidden cause: dark matter annihilation. My former collaborator identified another possible linked process: basalt flows on the Moon happen with the same frequency as those on the Earth. This certainly doesn’t prove that dark matter causes basaltic upwellings, but it may indicate that the cause of these deep geologic processes is located outside of either celestial body.

    Maybe, in this way, the planets are functioning as something like seismometers, recording the action of something unseen, which acts according to our stellar system’s distance from the galactic plane. Stone telescope, indeed!

The Cosmic Ray Threat: Is Our Sun Shutting Down?

The sun on May 29, 2018 (NASA/SDO)
A nearly blank solar face has become typical, as in this image taken on May 29, 2018 by the Solar Dynamics Observatory

It’s been raining cats and dogs!

The quietest spot in the solar system is on the dark side of the full moon. On these nights, the moon blocks out all the sunlight, as well as all the radio and other electromagnetic radiation from the Earth. It’s the most serene spot in the solar system to do astronomy. But, things have been getting louder there recently.

Earlier this year, Schwadron et al. reported on observations by the Lunar Reconnaissance Orbiter’s CRaTER instrument (abstract printed below). This instrument was specially designed to measure cosmic ray intensity when the Sun is shielded behind our moon. The report states that cosmic ray intensity is now the highest it has been since we started measuring cosmic rays, and it’s getting more intense faster than expected.

Cosmic rays are electrically charged atomic nuclei, hurtling through space at incredible speeds. There are two flavors of these little morsels. One type comes from the sun; these are called solar energetic particles, or SEPs. The other type is generally thought to have been blasted out of supernovae, and then accelerated around the Milky Way and other galaxies by intense intergalactic magnetic fields; these are called galactic cosmic rays, or GCRs.

Both types of speed demon are so small and so fast that a few may have shot through your body in the time it took to read this sentence. But sometimes, those cosmic rays hit other atoms. When they do, the effects can range from cosmic ray showers, to lightning, cloud formation, malfunctioning Toyotas, heart attacks, cancer, or even evolution.

Schwadron et al. showed that, if the hail of specifically galactic cosmic rays keeps intensifying at the rate it has over the past five years, it could become a dramatic risk to our astronauts.

The details

In 2009, the Lunar Reconnaissance Orbiter (LRO) launched with an instrument designed to study the cosmic ray environment around the moon. This instrument, called the Cosmic Ray Telescope for the Effects of Radiation (CRaTER), was outfitted specifically to model the cosmic ray effects on humans, both with and without shielding.

In late 2013 through 2014, a series of articles came out that detailed the initial findings. One of these papers, Does the worsening galactic cosmic radiation environment observed by CRaTER preclude future manned deep space exploration?, by Schwadron et al., definitively warned that it was looking pretty bad for the astronauts.

They combined cosmic ray measurements made by CRaTER with those made by the Advanced Composition Explorer (ACE) spacecraft to build a picture of the environment going back to about the year 2000. They then used a model to relate the GCR flux to the strength of the solar cycle, as indicated by sunspot number, going back to about 1950. Combining these two sets of data (observation plus model), they were then able to make a forecast about the GCR flux for the coming solar minimum, depending on just which minimum we end up with (more on this below).

They noted that, based on GCR flux measured by ACE during the previous solar minimum (~2009), male astronauts would have reached their recommended limit of GCR exposure within 400 days, and female astronauts would have reached theirs within 300. Based on forecasts of the next solar minimum (~2019), the CRaTER observations indicated exposure times would decrease by about 20%: about 320 days for male astronauts, and about 240 days for female. Given about six months to travel from Earth to Mars, and then six months back, a crew would easily exceed their GCR dose rates and enter dangerous territory.
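The arithmetic behind those numbers is simple enough to restate (this is just my restatement of the quoted figures, not Schwadron et al.’s dose model):

```javascript
// A ~20% harsher GCR environment cuts the allowed exposure time by ~20%.
var baselineDays = { male: 400, female: 300 }; // at the ~2009 solar minimum
var reduction = 0.20;                          // forecast for the ~2019 minimum
var forecastDays = {
  male: Math.round(baselineDays.male * (1 - reduction)),    // 320 days
  female: Math.round(baselineDays.female * (1 - reduction)) // 240 days
};
```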

Estimated GCR dose rates, from Schwadron et al. (2014)
This image, from Schwadron et al. (2014), shows dose rates as estimated from ACE (red) and CRaTER (green) measurements. These are compared with sunspot counts (black, bottom curve).

In the latest paper, Update on the worsening particle radiation environment observed by CRaTER and implications for future human deep-space exploration, Schwadron et al. revisit this prediction, since we’re almost in the middle of solar minimum. Their conclusion? The 2014 paper overestimated the friskiness of our sun, and underestimated the intensity of GCRs, by about 10%. In other words, astronauts will be able to spend even less time than expected in deep space, because the cosmic ray environment was getting worse faster than expected.

Measured and estimated GCR dose rates, from Schwadron et al. (2018)
This is the same chart as above, but with additional data from CRaTER (from Schwadron et al. 2018). Notice, the measured dose rates are higher than estimated.

Why was their prediction so far off?

What’s up with the Sun?

“That’s a very good and a very hard question,” said Nathan Schwadron, principal investigator for the CRaTER experiment. “I am not sure why the dose rates are going up so quickly. [But] I suspect two issues:

“1) The magnetic fields in the solar system are weakening more rapidly than we anticipated. This has the effect of allowing more radiation into the solar system.

“2) The drift of cosmic rays has changed dramatically due to a recent reversal in the dominant polarity of the magnetic field within the solar system. This is a natural solar cycle effect, but may be accentuated due to the weak strength in the magnetic field.”

[emphasis by pjm]

Every 11 years or so, the sun goes through a cycle. This cycle is observed through increasing and decreasing numbers of sunspots, magnetic field strength, and other forms of solar emanations. Humans have observed this cycle since about the 1600s, and these observations form one of the longer records of continuous human measurement. Right now, in mid-2018, we’re at the tail end of Cycle 24. Solar minimum is predicted to hit around 2020.

The flux of GCRs follows this cycle. During solar maximum, the GCR flux is low. During minimum, it’s high. When the charged GCRs pass through the Sun’s far-flung magnetic field, they experience a deflecting force. The net effect of this force is that the GCRs don’t get very deep into the solar system before getting redirected back out again.

When the solar magnetic field is strong, during solar maximum, only a few, very high speed GCRs make it to the Earth. When the field is weak, during solar minimum, more GCRs can get to us, including lower energy ones.

At this point, it may appear that an increasing GCR flux is just a normal result of the approach to solar minimum, though the current minimum may be some kind of really deep minimum, and it’s approaching super fast. However, that is not the only story about the sun.

What’s down with the sun?

Back in 2011, the sun was nearing the top of its cycle, solar maximum. At that time, I filmed a pedagogical video on a prediction that was made by three sets of researchers. They forecasted that, based on observation and theory, the sun was going into a severe quiet period. I followed that video with a few additional pieces to expand on the concept.

William Livingston and Matthew Penn observed the strength of magnetic fields within sunspots from 1992 through 2009. It is well known that, in the vicinity of a strong magnetic field, spectral lines can split into multiple lines – this is the so-called Zeeman effect. This splitting in the Fe I 15,648.5 Å line can be used to estimate the strength of the magnetic field. Livingston and Penn showed that the magnetic field in the smallest sunspots was about 1500 Gauss – apparently the minimum needed to form a visible spot at all. They also showed that the average strength of the magnetic field in sunspots was trending down over time. They forecasted that, based on that trend, the next solar cycle may not produce magnetic fields stronger than 1500 G – in other words, the sun may not have enough oomph to produce sunspots during Cycle 25.
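The relation between splitting and field strength is standard; here is a sketch of my own using the textbook Zeeman formula (the constants are the usual ones, not values taken from Livingston and Penn’s paper):

```javascript
// Zeeman splitting: dLambda [Angstrom] ~ 4.67e-13 * g * lambda^2 * B [Gauss].
// The Fe I 15648.5 Angstrom line has an effective Lande factor g ~ 3, which
// makes it unusually sensitive to magnetic fields.
function fieldFromSplitting(dLambda) {
  var lambda = 15648.5; // Angstroms
  var g = 3.0;          // effective Lande factor of the line
  return dLambda / (4.67e-13 * g * lambda * lambda); // field strength in Gauss
}

// A splitting of about half an angstrom corresponds to roughly 1500 G.
```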

The McMath-Pierce Solar Telescope
The astronomers made these observations at the McMath-Pierce solar telescope, which bears an uncanny resemblance to the pyramids at Giza.

McMath-Pierce Telescope Schematic
Light from the Sun is directed down a 150 meter optical tunnel by a mirror on top of the McMath-Pierce Solar Telescope. (Copyright © 1999 The Association of Universities for Research in Astronomy, Inc.)

Average sunspot magnetic field strength, Livingston and Penn (2009)

A second group, the Global Oscillation Network Group (GONG), studies sound waves on the sun. Ripples of gas on the sun, observed as a wiggling Doppler effect on spectral lines, can be analyzed to reveal processes deep within the sun’s interior. A periodic feature the GONG group has identified using these helioseismic studies is called the Torsional Oscillation – a pair of latitude bands of faster-flowing gas inside the sun. As the solar cycle progresses, these two bands start around the mid-latitudes (~55° N, ~55° S) and move toward the equator. Frank Hill et al. demonstrated that about halfway through the progression of this feature to the equator, a new torsional oscillation begins. The strength of these two bands can be a predictor of the strength of the next solar cycle. However, they also showed that the new torsional oscillation had not yet started in 2010, well past the equivalent starting point during the previous cycle. Hill et al. concluded that Cycle 25 would be at least very late in starting, and possibly very weak.

The Torsional Oscillation
The Torsional Oscillation is represented by the red bands that vector towards the equator. The green vertical lines show equivalent points during Cycle 23 and Cycle 24. Notice that there is no red at about 50° N and S at 2011.

The third indication came from observations of triply ionized iron spectra within the solar corona, by Richard Altrock of the Air Force Research Laboratory. I did an interview with Dr. Altrock back in 2011 on his observations, but here is the summary. Triply ionized iron is a good tracker of the sun’s corona. Around solar maximum, features appear in the corona at high North and South latitude. These features then progress quickly to the two poles up to solar maximum, and then disappear. Thus, this “Rush to the Poles” is a good indication of the progression of a solar cycle. As seen in Altrock’s diagram below, the Rush to the Poles for solar cycle 24 had barely started at the point equivalent to the three previous cycles. This suggested that the next cycle would, at the very least, start very late.


The sum total of these three sets of observations is that Cycle 25 will be late, weak, and possibly nonexistent.  In other words, the sun could be headed for a Grand Solar Minimum, something we have not witnessed since the late 17th Century. The Maunder Minimum was a period during which the sun sprouted virtually no sunspots. John Eddy, who named the period after E. W. Maunder, reexamined not only the history of sunspot observations, but also anecdotal evidence like stories of auroral activity. This event lasted some 70 years, and happened to coincide with an uncharacteristically cold period in Europe. Since then, the solar cycle has picked up and popped out sunspots every 11 years or so.

Grand Solar Minimum?

The CRaTER observations appear to support the Grand Minimum forecast. If this is the near future, what will it look like?

Every year or so, I check in on the sun. It has been very quiet lately. And I wonder, when will my kids next see a sun full of sunspots? Ten years? Seventy years? The measurements and analysis by the CRaTER team seem to suggest the latter.

However, it’s a different world now, and things have changed in the past seven years since the Grand Minimum forecast. The sun is a crafty beast, and it is not following the forecast. Please stay tuned for the next part in this series…


Over the last decade, the solar wind has exhibited low densities and magnetic field strengths, representing anomalous states that have never been observed during the space age. As discussed by Schwadron et al. (2014a), the cycle 23–24 solar activity led to the longest solar minimum in more than 80 years and continued into the “mini” solar maximum of cycle 24. During this weak activity, we observed galactic cosmic ray fluxes that exceeded the levels observed throughout the space age, and we observed small solar energetic particle events. Here, we provide an update to the Schwadron et al (2014a) observations from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) on the Lunar Reconnaissance Orbiter (LRO). The Schwadron et al. (2014a) study examined the evolution of the interplanetary magnetic field, and utilized a previously published study by Goelzer et al. (2013) projecting out the interplanetary magnetic field strength based on the evolution of sunspots as a proxy for the rate that the Sun releases coronal mass ejections (CMEs). This led to a projection of dose rates from galactic cosmic rays on the lunar surface, which suggested a ∼ 20% increase of dose rates from one solar minimum to the next, and indicated that the radiation environment in space may be a worsening factor important for consideration in future planning of human space exploration. We compare the predictions of Schwadron et al. (2014a) with the actual dose rates observed by CRaTER in the last 4 years. The observed dose rates exceed the predictions by ∼ 10%, showing that the radiation environment is worsening more rapidly than previously estimated. Much of this increase is attributable to relatively low-energy ions, which can be effectively shielded. Despite the continued paucity of solar activity, one of the hardest solar events in almost a decade occurred in Sept 2017 after more than a year of all-clear periods. 
These particle radiation conditions present important issues that must be carefully studied and accounted for in the planning and design of future missions (to the Moon, Mars, asteroids and beyond).

Don’t be lazy, dare to be Semantic! The Free Code Camp Tribute Project

Chingu Cohorts is a fantastic subgroup of Free Code Camp. They set up collaborative programming experiences among members of FCC on a roughly monthly schedule to help them develop their professional coding abilities. I have been taking part for the last two months or so, and have met great people, put together a solid project (/pengo), and generally learned a ton.

This round, I signed up for the FCC Speedrun. The goal is to complete all the FCC projects in five weeks. I may not achieve that goal, but I have set a few additional goals for myself.

  1. Create a standard “boilerplate” semantic webpage structure that will get improved with each project.
  2. Learn to do unit testing, and test driven development in general.
  3. Come to love CSS.
  4. Develop a sustainable workflow that I can apply in a professional setting.

I just completed the first project – the Tribute Page. Two aspects I’ll discuss here are semantic design and using JavaScript for static webpages, i.e. “why use JavaScript for static webpages, Peter!?”


Semantics is a fancy name for the proper use of HTML in a webpage. Semantics means using the right tags for the right situation. It helps someone read the code and figure out what’s what, but also helps automated web crawlers identify specific pieces of information. HTML5 introduced many new tags, like <article> and <figure>, that offer more descriptive markup. When designing webpages, I usually find I need tags that don’t exist – a tag for a title splash area, for instance; not the <title> that shows up in the page tab, but a title splash zone. But, it’s ok in these cases to just add either class or id attributes, or to just be creative. For example, my title splash zone is denoted as a special <section> above the main <article>.

<section>
  <h1>Adolf Seilacher</h1>
  <span>(March 15, 1925 - April 26, 2014)</span>
  <h2>Linked the deep ocean with prehistoric life</h2>
  <p>It is said that old warriors never die, they just fade away.</p>
</section>

At the bottom of the page, I put a standard footer that will appear (better and better!) in each project. Each is inside its own <div> and placed appropriately using specific class attributes.


<footer>
  <div class="copyright">© 2017 Peter Martinson</div>
  <div class="github"><a href="">FCC : Tribute 1.0</a></div>
  <div class="license">MIT License</div>
</footer>


footer .copyright {
  width: 32%;
  text-align: left;
  padding-left: 1%;
}

footer .github {
  width: 33%;
  text-align: center;
}

footer .license {
  width: 32%;
  text-align: right;
  padding-right: 1%;
}

I chose to put only the footer in index.html, but the rest of the page is injected from app.js.

var output = '';

output += '<section>';
output += '<h1>Adolf Seilacher</h1>';
// ... build up the rest of the page markup ...
output += '</section>';

var tag = document.getElementById("app");
tag.innerHTML = output;

Why? There are a couple reasons, besides the fact that I like doing things the hard way, to learn. First, reusability. All future pages can use the same basic index.html and CSS, but will need their own .js file. Ultimately, all these pages will be loaded dynamically into a portfolio page, and I think it will be useful to have them already JavaScripted. Second, proof of principle. I’m not using a framework because I want to force myself to get deep into JavaScript. I’ll use some jQuery, but want to try to limit that to AJAX stuff and whatever may get crazy across sundry browsers. Third, testing. Though there is no test here, I want to begin implementing unit tests, which means you need your pages to be slapped in with JavaScript.

That’s it. Go see the finished product in CodePen, and the source code at GitHub.

Don’t return the call – Callback instead!

Callbacks took me a while to understand, even though they are a fundamental part of JavaScript, and especially of Node.js. The concept finally clicked while working on a Slack slash command with the Chingu Penguins cohort, called Pengo. The key takeaway is that there are two categories of functions – those that use return, and those that use callbacks. A function that sends an HTTP request needs a callback, while one that does internal operations can simply use return.

When the Slack user invokes /pengo, one of several useful programming tips is recalled from a MongoDB database on mLab. The steps required are:
1. Receive POST request from Slack
2. Request document from mLab database
3. Receive quote response from the database
4. Format the quote in a JSON object
5. Send the JSON object back to Slack

The place callbacks clicked for me is step 3. pengo.js sends the request to a function, getQuote.atRandom(), which queries the database and serves the response back to pengo.js to play with. The problem is that the database query may take time to run, so getQuote.atRandom() may return before the query is finished.

My initial construction of getQuote.atRandom() was the following:

atRandom : function() {
  Quote.count({}, function(err, N) {
    if (err) return err;
    var id = Math.floor(Math.random() * N);
    Quote.find({ quote_id : id }, function(err, result) {
      if (err) return err;
      else return result;
    });
  });
}
Now, if pengo.js calls the function with var quote = getQuote.atRandom();, quote will always end up undefined. This is because the return statement is reached before the database query finishes its run. The solution here is to use a callback.

Callbacks are functions passed into other functions, which fire when the work inside the parent function has completed. JavaScript does not pause the parent function while a request is in flight; instead, the callback runs whenever the response finally arrives. In other words, you dump the return statement and replace it with a callback.
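To see the difference in isolation, here is a tiny self-contained demo of my own, simulating the slow database query with setTimeout:

```javascript
// A "query" that takes 10 ms, written both ways.
function queryWithReturn() {
  var result;
  setTimeout(function () { result = 'a quote'; }, 10); // finishes later
  return result; // executes immediately, before the "query" completes
}

function queryWithCallback(callback) {
  setTimeout(function () {
    callback(null, 'a quote'); // fires only after the "query" completes
  }, 10);
}

console.log(queryWithReturn()); // undefined

queryWithCallback(function (err, quote) {
  if (err) return console.error(err);
  console.log(quote); // 'a quote'
});
```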

The way I implemented this is as follows. First, replace the return with a callback. Note, you just call it callback:

atRandom: function(callback) {
  Quote.count({}, function(err, N) {
    if (err) return callback(err);
    var id = Math.floor(Math.random() * N);
    Quote.find({ quote_id : id }, function(err, result) {
      if (err) callback(err);
      else callback(null, result, id);
    });
  });
}
Notice a few things. The callback accepts multiple parameters, but the first is designated for any error conditions. If there’s no error, set the first parameter to null. The savvy reader will also notice that, while my function accepts a single callback, it invokes it from two places, because there are two calls to the database.

Second, do the business in pengo.js within the callback function:

getQuote.atRandom(function(err, quote) {
  if (err) return console.error(err);
  // 'quote' is now the response object
  // do with it what you will!
  var data = quote.text;
});

The guts of this statement are within the callback function, which does not run until the database request has completed.

It’s a slight difference in how to write a function (using callback instead of return), but the returns are great.

Earth Atmosphere, on the Moon!

Terada et al. have demonstrated that oxygen from the Earth can be transported to the Moon’s surface. The core of their study reports the observation of high-speed (1-10 keV) oxygen ions, O⁺, by Japan’s Kaguya (SELENE) lunar orbiter. These high-speed O⁺ ions are only observed when 1) the Earth is between the Sun and the Moon, and 2) Kaguya is between the Earth and the Moon. This zone is where the Earth’s magnetic field excludes the Sun’s solar wind and channels the ions that have left the Earth. Terada et al. show that this terrestrial stream of oxygen is ¹⁶O-poor, similar in isotopic composition to atmospheric ozone. It also matches a hitherto mysterious oxygen signature found in several lunar samples returned by the Apollo missions.

From Terada et al.:

A consequence of this finding is that the entire lunar surface can be contaminated with biogenic terrestrial oxygen, which has been produced by photosynthesis over a few billion years (with an estimate of 4×10^36 O+ ions for about 2.4 billion years after the Great Oxygenation Event).

The implications are fascinating. Photosynthesis appears to have begun 2.4-2.7 billion years ago, and created the massive oxygen instability in our atmosphere (~21% O2). Since that time, the Earth has been puffing this oxygen into nearby interplanetary space, where a good amount could get sucked up by the Moon’s surface. Over time, that sequestered oxygen would get buried by weathered lunar powder, thus creating an incredibly stable geologic (selenologic?) record of the Earth’s changing atmosphere. Whether that record could actually be teased out is debatable (and is questioned by Terada et al.), but perhaps deep core samples could provide a clear signal. In general, this work is another reminder that life on Earth has really been life in the Solar System. Sending people back to the Moon to study its rocks is a clearly indispensable step in understanding life’s interaction with both its home planet and its solar environment.
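The paper’s numbers can be sanity-checked with a back-of-envelope calculation: the quoted minimum flux of 2.6 × 10^4 ions cm^−2 s^−1 flows only during the ~5 days per orbit the Moon spends in the magnetotail, so multiplying by that duty cycle, the Moon’s geometric cross-section, and 2.4 billion years should land within an order of magnitude of the paper’s 4 × 10^36 ion estimate. A rough sketch (all values approximate, and the duty-cycle and area factors are my own assumptions, not the paper’s method):

```javascript
// Order-of-magnitude estimate of terrestrial O+ delivered to the Moon.
var flux = 2.6e4;                 // minimum O+ flux, ions cm^-2 s^-1 (from the paper)
var dutyCycle = 5 / 29.5;         // ~5 days in the plasma sheet per ~29.5-day lunar orbit
var seconds = 2.4e9 * 3.15e7;     // 2.4 billion years, in seconds
var moonRadiusCm = 1.737e8;       // lunar radius, cm
var crossSection = Math.PI * moonRadiusCm * moonRadiusCm; // geometric cross-section, cm^2

var totalIons = flux * dutyCycle * seconds * crossSection;
console.log(totalIons.toExponential(1)); // ~3e37, within ~10x of the paper's 4e36
```

An order-of-magnitude overshoot is unsurprising: the quoted flux is a lower bound measured only at Kaguya’s location, and the actual implantation efficiency over the lunar surface is smaller than the geometric cross-section assumed here.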


For five days of each lunar orbit, the Moon is shielded from solar wind bombardment by the Earth’s magnetosphere, which is filled with terrestrial ions. Although the possibility of the presence of terrestrial nitrogen and noble gases in lunar soil has been discussed based on their isotopic composition , complicated oxygen isotope fractionation in lunar metal (particularly the provenance of a 16O-poor component) re­mains an enigma . Here, we report observations from the Japanese spacecraft Kaguya of significant numbers of 1–10 keV O+ ions, seen only when the Moon was in the Earth’s plasma sheet. Considering the penetration depth into metal of O+ ions with such energy, and the 16O-poor mass-independent fractionation of the Earth’s upper atmosphere , we conclude that biogenic terrestrial oxygen has been transported to the Moon by the Earth wind (at least 2.6 × 104 ions cm−2 s−1) and implanted into the surface of the lunar regolith, at around tens of nanometres in depth. We suggest the possibility that the Earth’s atmosphere of billions of years ago may be preserved on the present-day lunar surface.

Biogenic oxygen from Earth transported to the Moon by a wind of magnetospheric ions
Kentaro Terada, Shoichiro Yokota, Yoshifumi Saito, Naritoshi Kitamura, Kazushi Asamura, Masaki N. Nishino
Nature Astronomy 1, Article number: 0026 (2017)

The First Law of Galactic Rotation

Nota Bene:  One of the authors, Stacy McGaugh, pointed out to me that he considers the principle described below as the Third Law of Galactic Rotation.  If you would like to know why, please go read his fascinating article on the law, as well as his articles on the other two laws.

The online magazine Quanta recently published a hot-button article called “The Case Against Dark Matter”. The gist of the article is that a flurry of papers and lectures presented over the last months of 2016 call into question the existence of dark matter. One of the papers stood out to me, so I’ll review it here. It is called Radial Acceleration Relation in Rotationally Supported Galaxies, composed by Stacy McGaugh, Federico Lelli, and James Schombert. The abstract is reprinted below, after I summarize why I think the word extraordinary understates the result.  To be clear, for me, this appears to be the kind of discovery that will usher in a new view of the universe, much as the Michelson-Morley results motivated a similar paradigm shift in the early 20th Century.

The relationship

The authors compiled two sets of observations into one database called SPARC – Spitzer Photometry and Accurate Rotation Curves. One set, at wavelength 3.6 μm, was obtained from observations with the Spitzer Space Telescope. The other set, at wavelength 21 cm, was compiled from decades of observations with arrays of radio telescopes like New Mexico’s Very Large Array. The 3.6 μm observations see stars, and are used to measure the amount of stellar mass in galaxies. The 21 cm observations see otherwise-dark gas in galaxies via a spectral line of neutral hydrogen, and are used to create rotation curves for the galaxies.

The rotation curves represent the original problem Dark Matter was invented to solve. According to Kepler’s laws of planetary motion, as modified by Newton’s law of gravitation, the speed of an orbiting object depends on how much mass is within the orbit: more mass, faster object. For roughly point-like masses, like the objects in our Solar System, orbital speed is determined by distance from the central star. In a galaxy, things get more complicated because the mass is distributed. As you go further away from the center, more and more stars and other matter end up within your orbit, thus increasing your orbital speed. This increase of mass with distance is tempered by the fact that stars and gas thin out as you approach the edge of the galaxy. Therefore, the speed of orbiting objects should drop off as you leave the galaxy.
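The expected falloff is easy to see for a point mass: setting gravitational acceleration equal to centripetal acceleration, GM/r² = v²/r, gives v = √(GM/r), so quadrupling the distance halves the speed. A quick sketch using the Sun’s mass:

```javascript
// Keplerian orbital speed around a point mass: v = sqrt(G*M/r).
var G = 6.674e-11;    // gravitational constant, m^3 kg^-1 s^-2
var Msun = 1.989e30;  // solar mass, kg
var AU = 1.496e11;    // astronomical unit, m

function orbitalSpeed(rMeters) {
  return Math.sqrt(G * Msun / rMeters); // m/s
}

console.log(orbitalSpeed(1 * AU));  // Earth's distance: ~29,800 m/s
console.log(orbitalSpeed(30 * AU)); // Neptune's distance: ~5,400 m/s
```

This is the declining curve galaxies were expected to follow in their outskirts, and conspicuously do not.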

Rotation curves of two types of galaxy from McGaugh et al. (2016: Fig. 2). The black dots with error bars represent the observed orbital velocity at increased radial distance from the galactic center. The other curves show what the orbital velocity should be, due to various masses in the galaxy: dotted = gas, dashed = stars, dot-dash = galactic bulge. The solid blue line is the total orbital velocity expected due to all observed masses put together. In other words, far away from the galactic center, objects are moving much faster than they should be due to observed matter.

Exactly the opposite is found. The 21 cm observations have shown that the orbital velocity of gas usually stays flat or even rises as you leave the galaxy, implying that the amount of enclosed matter keeps growing all the way out to the galaxy’s edge! But, to date, nobody has actually observed that mass. This missing mass was given the apt name Dark Matter by astronomer Fritz Zwicky in the 1930s. Based on these galactic rotation curves, as well as other observational evidence, over 80% of the matter in the universe must be this dark matter. Other theories that don’t invoke dark matter have been invented – such as the Modified Newtonian Dynamics of Mordehai Milgrom – but none has had the success enjoyed by dark matter.

In the future, the work by McGaugh et al. might be seen as the silver bullet that killed dark matter. Using both sets of observations, they calculate the centripetal acceleration exerted on the objects in over 150 galaxies. Centripetal acceleration can be calculated in two ways: one from the orbital velocity (g = V^2/R), the other from the mass distribution (g = GM/R^2 for a point mass). McGaugh et al. get the mass from the 3.6 μm observations, which can be converted from luminosity to mass quite directly. This gives them g_{bar}, the centripetal acceleration due to observed mass. Then they get the orbital velocity from the 21 cm observations, via the rotation curves. This gives them g_{obs}, the centripetal acceleration observed to exist. Then, these are plotted against each other:

Centripetal acceleration due to observed matter versus observed centripetal acceleration. Plots from McGaugh et al. (2016)

The relationship is absolutely extraordinary! It is not directly proportional, but bends away from the one-to-one line at low accelerations, in the outskirts of galaxies. What is remarkable is that the exact same relation holds for all galaxies in the study, and does not depend on galactic type. They fit the relationship to a relatively simple equation

g_{obs} = \mathcal{F}(g_{bar}) = \frac{g_{bar}}{1-e^{-\sqrt{g_{bar}/g_{\dagger}}}}

which only requires one parameter for the fit, g_{\dagger}.

Why is this extraordinary? It does NOT mean that observed mass causes what we see in the rotation curve – there is still something missing. However, it shows that there is a simple, universal relationship between the rotation curve and the observed mass. In other words, using this new law, given a galaxy’s distribution of observed (not dark) matter, you can produce that galaxy’s rotation curve. There is absolutely no need to invoke invisible mass.
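The fitting function’s two regimes can be checked numerically: at high accelerations (the inner parts of galaxies) it reduces to g_obs ≈ g_bar, plain Newtonian behavior, while at low accelerations it approaches g_obs ≈ √(g_bar·g†), the flat-rotation-curve regime. A quick sketch, using the paper’s fitted scale g† ≈ 1.2 × 10^−10 m s^−2:

```javascript
// Radial acceleration relation: g_obs = g_bar / (1 - exp(-sqrt(g_bar / gDagger))).
var gDagger = 1.2e-10; // fitted acceleration scale, m s^-2 (McGaugh et al. 2016)

function gObs(gBar) {
  return gBar / (1 - Math.exp(-Math.sqrt(gBar / gDagger)));
}

// High-acceleration limit: Newtonian, g_obs ~ g_bar
console.log(gObs(1e-8) / 1e-8); // ~1.0

// Low-acceleration limit: g_obs ~ sqrt(g_bar * gDagger)
var gBarLow = 1e-13;
console.log(gObs(gBarLow) / Math.sqrt(gBarLow * gDagger)); // ~1.0
```

That low-acceleration limit is exactly the scaling Milgrom’s MOND postulated decades earlier, which is part of why this one-parameter fit has stirred up the dark matter debate.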

The First Law

Kepler developed his laws after years of observational, computational, and hypothetical work. With these laws, the orbit of any object in the solar system can be completely determined (disregarding important small deviations). There is no concept of gravitational force in Kepler’s laws, only geometry, time, and motion. The laws hold for every stellar system.

McGaugh et al. have found a new law that applies to galaxies. And, they are not blind to this law’s significance. In a more extensive paper published in the Astrophysical Journal, Lelli et al. state

The radial acceleration relation describes the local link between baryons and dynamics in galaxies, encompassing and generalizing several well-known galaxy scaling laws. This is tantamount to a Natural Law: a sort of Kepler law for galactic systems. A tight coupling between baryons and DM [dark matter] is difficult to understand within the standard ΛCDM cosmology. Our results may point to the need for a revision of the current DM paradigm.

One difference between Kepler’s laws and this new galactic law, is that Kepler believed he had found the causes of his laws. Later on, his causes were tossed out by the new theory of Newtonian gravitation. McGaugh et al. have not yet settled on a cause for their newfound relationship. However, they do hint at some possibilities:

Possible interpretations for the radial acceleration relation fall into three broad categories: (1) it represents the end product of galaxy formation; (2) it represents new dark sector physics that leads to the observed coupling; (3) it is the result of new dynamical laws rather than dark matter. None of these options are entirely satisfactory.

I’ll conclude by drawing another parallel. Case Western Reserve University, home of two of this paper’s authors, was host to another critical observational result. In 1887, Albert Michelson and Edward Morley performed an interferometry experiment there to measure the Earth’s velocity through the ether. They famously returned a “null” result. Less than two decades later, Albert Einstein shocked the world by declaring, consistent with Michelson and Morley’s result, that the ether – a fundamental substance of physics since the Enlightenment – does not exist.  Never again could theory or experiment rely on belief in the existence of the ether.  Case Western may, again, be the source of a critical result that will provide reason to both discard a fictional substance and provoke a new paradigm shift in our view of the cosmos.

But, maybe not.  If this discovery doesn’t kill dark matter, then dark matter’s place in the universe is about to become much stronger.  Either way, we now have a new, first law of galactic rotation.


We report a correlation between the radial acceleration traced by rotation curves and that predicted by the observed distribution of baryons. The same relation is followed by 2693 points in 153 galaxies with very different morphologies, masses, sizes, and gas fractions. The correlation persists even when dark matter dominates. Consequently, the dark matter contribution is fully specified by that of the baryons. The observed scatter is small and largely dominated by observational uncertainties. This radial acceleration relation is tantamount to a natural law for rotating galaxies.

S. S. McGaugh, F. Lelli, and J. M. Schombert, Physical Review Letters 117, 201101 (2016).

Echoes of Ancient Cataclysm Heard Through Fossils

To paraphrase Jon Stewart, the United States after the presidential election is the same as it was before.  However, the Earth is certainly not the same since it was showered by the flotsam of numerous supernova explosions around 2 million years ago.  In this post, which is a follow-up to this one, we review a paper which purports to present evidence of those long-ago supernovae as recorded in microfossils at the bottom of the ocean.  The paper’s abstract is at the end of this blog post.

Single domain magnet chain within a magnetotactic bacterium.
First, what are microfossils?  In the present case, they are the remains of bacteria which populate the ocean floor.  Usually, the remains left behind by bacteria are anomalous chemical signals, such as the banded iron formations made by photosynthesizing bacteria.  Since bacteria are single-celled creatures, they rarely leave fossils of their actual body parts (organelles).  However, some bacteria do produce hard internal structures that can get left behind.  One incredible type that does leave actual fossils is the Magnetotactic Bacteria (MTB).  These bacteria ingest iron from ocean water and manufacture tiny magnets within their single-celled bodies, which they can then use to orient to the Earth’s magnetic field.  When the bacteria die, these little magnets, called magnetosomes, remain as microscopic chains of magnetite.  Unambiguous fossils of these little guys can be found going back almost a billion years.

Ludwig, et al., perform analyses on MTB and other iron microfossils from two oceanic drill cores from the Pacific Ocean.  Most of the iron within the microfossils is made up of the common isotope 56Fe, but the analysis presented here demonstrates a spike in 60Fe within these magnets around 2-2.5 million years ago (Ma).  As stated before, 60Fe is most likely produced by supernova explosions, but it can also be delivered to the Earth by meteorites.  Ludwig, et al., isolate supernova-specific 60Fe via a careful chemical leaching technique which draws out only secondary iron oxides and microscopic grains, thus leaving any potential micrometeorite particles in the discarded residue.  The leached material was then analyzed by accelerator mass spectrometry.  They found a clear spike in the ratio 60Fe/Fe around 1.8-2.6 Ma in both samples, and attributed the spike to the debris of multiple supernova explosions.

As noted previously, the supernovae were probably associated with the formation of the so-called Local Bubble.  According to the authors:

The Local Bubble is a low-density cavity ~150 parsecs (pc; 1 pc = 3.09×10^16 m) in diameter, within the interstellar medium of our galactic arm, in which the solar system presently finds itself.  It has been carved out by a succession of ~20 [supernovae] over the course of the last ~10 Ma, likely having originated from progenitors in the Scorpius-Centaurus OB star association, a gravitationally unbound cluster of stars ~50 pc in radius.

A future Stone Telescope post will discuss the possible relationship between the Local Bubble supernovae and the Pliocene-Pleistocene geologic boundary, but for now let’s consider two aspects of this story:  1) the use of geology for historical astronomy, and 2) the plight of the magnetotactic bacteria.

Geology as a temporal telescope

Typical presentations of astronomy compare looking through a telescope to traveling in a time machine.  The speed of light is finite, and the closest star to our solar system is a few light years away.  Just as Han Solo’s 12-parsec Kessel Run is strange, since a parsec is a unit of distance and not of time, a light year is a unit of distance, not time – it is how far light travels in one year.  Say something happens 100 light years (ly) away from the Earth.  The absolute soonest that we could know anything about it is 100 years after it happened, when its radiation finally reaches our planet.  Therefore, looking through a telescope, we see objects as they were long ago.

However, when you see an object through a telescope, you are not looking at a stop-motion picture.  What you see is changing.  For example, Johannes Kepler saw a supernova in 1604.  Astronomers have located the remnant of this supernova, which is now a cloud of plasma much larger than the original star.  It’s about 13,000 ly away, which means the actual explosion occurred roughly 13,400 years ago.  In other words, when we look at the remnant today, we are seeing it as it was about 412 years after Kepler observed the explosion.  To see what Kepler saw, we need to read his famous book on the subject.  There is no other physical evidence to see through the telescope.
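The arithmetic here is simple: light arriving in a given year left its source a number of years earlier equal to the distance in light years. A sketch with Kepler’s supernova (the ~13,000 ly distance is itself a rough estimate):

```javascript
// Lookback arithmetic: light arriving in year 'observedYear' left the
// source 'distanceLy' years earlier (negative result = BCE).
function explosionYear(observedYear, distanceLy) {
  return observedYear - distanceLy;
}

console.log(explosionYear(1604, 13000)); // -11396, i.e. roughly 11,400 BCE
// Viewing the remnant in 2016, we see light that left the remnant
// 2016 - 1604 = 412 years after the light Kepler saw left it.
console.log(2016 - 1604); // 412
```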

In geology, the physical evidence is still there!  We can, in a sense, pick chunks of that astronomical event up off of our planet’s crust.  Our planet is a net that captures the stuff of cosmic phenomena, and preserves it for future scientists to study.  In the present case, those little magnetotactic bacteria caught pieces of supernovae and incorporated them into their tiny bodies, which are preserved to this day for us to find.

Those little bacteria

But, what do the bacteria care?  They were just huffing up iron ions they found in the sludge at the bottom of the ocean.  Could they tell the difference between the common 56Fe and that rare delicacy 60Fe?  Maybe they couldn’t, but maybe they could.

Vladimir Vernadsky famously emphasized that different organisms are characterized by different atomic weights of specific elements within their bodies.  The calcium in a horse would have a different atomic weight than the calcium in a mushroom, which means a different ratio of calcium isotopes.  Vernadsky believed that organisms sought out and selected specific isotopes of elements with which to build their bodies.  With that as just a brief indication, maybe the magnetotactic bacteria could tell the difference between the usual fare and the exotic 60Fe.

Maybe the 60Fe made slightly better magnets?  Biological functions have been shown to respond to slight isotope variations, for example in ATP synthesis.  If there was some type of advantage in taking in 60Fe, perhaps this lent an evolutionary advantage to those bacteria that could tell the difference.

But, even if they couldn’t tell the difference, they were organisms that tasted of the supernovae.  Perhaps other organisms felt the effect of those supernovae as well.  But that’s an investigation for next time.


Massive stars (M ≳ 10 M⊙), which terminate their evolution as core-collapse supernovae, are theoretically predicted to eject >10^−5 M⊙ of the radioisotope 60Fe (half-life 2.61 Ma). If such an event occurs sufficiently close to our solar system, traces of the supernova debris could be deposited on Earth. Herein, we report a time-resolved 60Fe signal residing, at least partially, in a biogenic reservoir. Using accelerator mass spectrometry, this signal was found through the direct detection of live 60Fe atoms contained within secondary iron oxides, among which are magnetofossils, the fossilized chains of magnetite crystals produced by magnetotactic bacteria. The magnetofossils were chemically extracted from two Pacific Ocean sediment drill cores. Our results show that the 60Fe signal onset occurs around 2.6 Ma to 2.8 Ma, near the lower Pleistocene boundary, terminates around 1.7 Ma, and peaks at about 2.2 Ma.

Echoes of Ancient Cataclysm Heard Through Ocean Rock

Although I should be joining the angst against our newly elected president, this paper fell into my lap. It precisely represents the purpose of my blog, and so reviewing it is probably a better way to spend my time than yelling at idiots on Facebook.

A neat paper just went up on The Link Between the Local Bubble and Radioisotopic Signatures on Earth. It came to my attention via the Astrobiology Web, where they published the abstract. I’ll throw up the abstract below, after a quick summary of the paper and its predecessor.

Quick Summary

The iron isotope 60Fe is quite rare in nature. It’s only created during cataclysmic events – yes, more cataclysmic than the 2016 presidential election. Think supernovae, or worse. It has a half-life of about 2.6 million years (Myr), which means that none of the 60Fe generated about 4.6 billion years ago, during the creation of our solar system, is left. We know it existed, though, because its echo is preserved as an excess of its daughter isotope, 60Ni, in the solar system’s most ancient materials; that excess is created only by the decay of 60Fe. Thus, any live 60Fe we find in the crust today must have salted the Earth only a few million years ago.
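The claim that no primordial 60Fe survives follows directly from the decay law, fraction remaining = (1/2)^(t/t_half): the solar system’s age is about 1,770 half-lives of 60Fe, leaving a fraction so small it rounds to zero even in floating point. A quick check:

```javascript
// Fraction of a radioisotope remaining after time t: (1/2)^(t / halfLife).
function fractionRemaining(t, halfLife) {
  return Math.pow(0.5, t / halfLife);
}

var halfLife = 2.6e6; // 60Fe half-life, years
var age = 4.6e9;      // approximate age of the solar system, years

console.log(age / halfLife);                   // ~1769 half-lives
console.log(fractionRemaining(age, halfLife)); // 0 (true value ~10^-533, underflows)

// By contrast, 60Fe deposited ~2.6 Ma ago has survived just one half-life:
console.log(fractionRemaining(2.6e6, halfLife)); // 0.5
```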

An earlier paper by Wallner, et al., Recent near-Earth supernovae probed by global deposition of interstellar radioactive 60Fe, presented mass spectrometer analysis of eight portions of ancient ocean crust scattered around the world, each containing tiny amounts of 60Fe. This 60Fe was most likely produced by supernova explosions within the last 10 Myr or so. It just so happens that astronomers have found possible evidence of these explosions, in the bodies of stars and matter within the Scorpius Centaurus Association, which stretches from Antares in Scorpius all the way down to include the Southern Cross. These supernovae likely inflated a cavity within the local Orion Spur, a dense part of the Milky Way’s interstellar medium. In fact, evidence of this cavity has been explored for the past few decades, and it has the name Local Bubble – our solar system has been traveling inside this cavity for at least the past 10 Myr. Hence, the 60Fe found by Wallner, et al., sounds like an echo of this supernova chorus that sang out so near our planet a few million years ago.

The authors go further in a second paper, Feige, et al. Here, several stars from the Scorpius Centaurus Association are identified as potential precursors to the Local Bubble supernovae. Tracing the stellar trajectories back in time, Feige et al. create a synthetic history of the Local Bubble to model what Earth’s environment may have encountered. They find that, according to the model, the best fit to the 60Fe signal found in the Earth’s crust occurs at about the time our solar system passed through the shell of the expanding bubble, 2-3 Ma. They note that the oceanic drill cores also indicate a supernova signal around 6.5-8.7 Ma, which they will attempt to model in the future.

It should be noted that Wallner, et al., are not the first to identify live 60Fe in the Earth’s rock. Other researchers have found such deposits in the single domain magnet chain fossils of magnetotactic bacteria, again, in the same rough date range (~3 Ma). The transition from Pliocene to Pleistocene also occurred around this time, and may have had some direct causal relationship with the supernovae that peppered the Earth with 60Fe. Maybe we’ll investigate some of these loose ends in future posts!

The Local Bubble about 2.2 million years ago, according to simulations by Feige, et al. Our solar system lives at (0, 0) on the graph, right at the edge of the expanding bubble.


Traces of 2-3 Myr old 60Fe were recently discovered in a manganese crust and in lunar samples. We have found that this signal is extended in time and is present in globally distributed deep-sea archives. A second 6.5-8.7 Myr old signature was revealed in a manganese crust. The existence of the Local Bubble hints to a recent nearby supernova-activity starting 13 Myr ago. With analytical and numerical models generating the Local Bubble, we explain the younger 60Fe-signature and thus link the evolution of the solar neighborhood to terrestrial anomalies.