Each year, the world of science marches onward, making massive strides and small steps alike. With so many scientific fields out there, it can be incredibly difficult to decide which stories deserve a spot in a year-end roundup, and some end up buried altogether. So to start off a fresh new year, here are four breakthroughs in science and mathematics that may not have made the front page, but that are incredibly important nonetheless.
1. Redefining the kilogram
Unless you hail from south of the border, you probably see the fundamental unit of mass all over the place: the kilogram. But have you ever wondered where exactly the kilogram’s definition comes from? How do we know that the mass “1 kg” represents actually means something, and isn’t just arbitrary? Well, up until this year, it was arbitrary, in a sense. Since 1889, the kilogram wasn’t defined by some fancy physics concept, but by a hunk of platinum-iridium alloy kept in France that served as the standard reference for what “1 kg” meant. This year, as part of a sweeping redefinition of measurement units in terms of core physical constants, the kilogram was officially redefined in terms of the Planck constant, a fundamental constant of quantum mechanics; a rough sketch of how that works follows below.
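For the curious, the mechanics of the new definition go roughly like this (a sketch, not the official wording of the resolution): the Planck constant is now fixed to be exactly h = 6.62607015 × 10⁻³⁴ kg·m²·s⁻¹. Since the second and the metre are already defined by a caesium transition frequency and the speed of light, the only unknown left in that unit is the kilogram, so one kilogram is simply whatever mass makes the fixed value of h come out exactly: 1 kg = h / (6.62607015 × 10⁻³⁴ m²·s⁻¹).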
2. AI still isn’t as smart as us
These days, we hear a lot of buzz around the futuristic (or, in some cases, infamously cyberpunk) technologies that developments in AI are bringing us, from self-driving vehicles to accurate prediction systems. If you’re more directly connected to the computer science world, that buzz narrows down to “deep learning,” a kind of renaissance in machine learning and statistical learning that uses “deep neural networks” to do everything from learning classifications to generating art and music, all through complicated multi-parameter functions chained together. One bedrock of deep learning work is object detection: a trained network learns the objects in an image or scene and then recognizes them correctly in new images or different scenes. Over the past decade, increasingly sophisticated network architectures have pushed object detection to nearly perfect accuracy on standard benchmarks. This year, though, a simple set of experiments found that while these networks have become incredibly good at spotting familiar objects in familiar scenes, they fail when the context changes. Dubbed the “elephant in the room” result, the idea is simple and intuitive: a network that can easily detect everything in your living room gets confused and starts misdetecting objects once an image of an elephant is superimposed into the room, even though we could still recognize the objects in our rooms no matter what unexpected animal found itself at home there. A rough sketch of such an experiment follows below. There’s still quite a long way to go before our computers catch up with our complex analytical capability.
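For readers who want to poke at this themselves, here is a minimal sketch of the idea, not the researchers’ actual code: take any pretrained object detector, run it on a scene, paste an out-of-context object into that scene, and run it again to compare the detections. The file names and the pasted-object placement are hypothetical placeholders.

```python
# A minimal "elephant in the room" style experiment, assuming a recent
# torchvision (>= 0.13) and two local placeholder images of your own
# ("living_room.jpg", "elephant.png"); not the original paper's setup.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names
to_tensor = transforms.ToTensor()

def detect(img, score_thresh=0.5):
    """Return (class name, score) pairs the detector reports above a threshold."""
    with torch.no_grad():
        out = model([to_tensor(img)])[0]
    return [(categories[int(l)], round(float(s), 2))
            for l, s in zip(out["labels"], out["scores"]) if s >= score_thresh]

scene = Image.open("living_room.jpg").convert("RGB")
print("original scene:", detect(scene))

# Paste the out-of-context object into a corner of the same scene and re-run;
# the striking observation is that detections elsewhere in the image can change.
elephant = Image.open("elephant.png").convert("RGB").resize((120, 120))
altered = scene.copy()
altered.paste(elephant, (10, 10))
print("with elephant pasted in:", detect(altered))
```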
3. The hidden ninth planet (no, not Pluto)
Sorry, Pluto! We mean an actual planet, and a fairly massive one at that, lying at the edge of our solar system, for which astronomers have been finding increasing evidence. The notion of a large planet orbiting beyond Pluto but still within our solar system is something of a common fictional trope, but the discovery of such a planet would shake our cultural conceptions far more than Pluto’s de-listing did. So far, the evidence has rested on the strikingly erratic and irregular orbits of various small rocky and icy bodies in the far reaches of the solar system, but it wasn’t until this year that astronomers detected what may be a dwarf planet with an orbit so elliptical that it points strongly to the presence of this hidden planet somewhere at the edges. And even if the planet turns out not to exist, solving the mystery of why these orbits are so strange would be another bright step forward in our understanding of a constantly chaotic universe.
4. String theory isn’t compatible with our universe, or vice versa
A deep and controversial conjecture put forward by one of the world’s leading string theorists has left the theoretical physics community (that is, the physicists who believe string theory is our current best hope for a unified theory) in argumentative flutters. The gist of the conjecture is that universes that are logically consistent (that is, whose laws of physics form a consistent, non-contradictory set) must, as estimated by string theory, share a particular set of properties. The main one concerns the density of energy in the emptiness of space (“dark energy”): it must decrease faster than a particular rate as the universe expands. The problem is that our universe appears to be doing no such thing, contradicting the conjecture, even though the conjecture holds true for all of the simple model universes that have so far been worked out within string theory. The reasonable implication is that perhaps the conjecture is too simple and needs refinement to deal with a complicated existing universe, a worry that feeds into the philosophically heated arguments in physics over whether the pursuit of “simplistic beauty” can lead one astray from a potentially complicated and messy truth. But this is claimed to be a fundamental conjecture, so either path seems to hold harrowing conclusions: either our theoretical conception of the cosmos is missing something fundamental, because we are still just beginning to understand it and may need to radically alter our entire picture of how the universe works, or something is wrong with our universe itself. (Though it is most likely the former, this issue may well split the string theory community.)
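For those who want the conjecture in symbols, the statement being described here appears to be the so-called de Sitter swampland criterion (an assumption on our part about which conjecture is meant): in reduced Planck units, the potential energy V of the fields driving the expansion should satisfy |∇V| ≥ c·V for some constant c of order one. That inequality rules out a stable, constant vacuum energy and forces dark energy to thin out as the universe expands, whereas a universe with a genuine cosmological constant, as ours appears to have, violates it.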