Can carbon capture and storage solve climate change?

Ronan Dubois investigates carbon capture and its potential to revolutionise the fuel industry

The 2015 Paris climate accord set the goal of limiting the global temperature rise to 2°C by the end of the century, a target commonly associated with an atmospheric CO2 concentration threshold of 450 ppm (parts per million). The International Energy Agency (IEA) has forecast that reaching these targets will be 140% more expensive without carbon capture and storage (CCS). So, what exactly are they talking about? CCS was first used in American oil fields in the 1970s. In short, it involves extracting carbon dioxide gas at polluting power plants or industrial sites, transporting it to a storage facility and injecting it deep underground into a suitable geological formation. Today, 17 large-scale projects operate around the world, storing 40 million tonnes of CO2 underground annually. CCS has two main purposes: enhanced oil recovery (EOR), whereby CO2 gas is injected into an oil well to increase the reservoir pressure and extract more petroleum; and the permanent sequestration of CO2 in deep saline formations. The latter are estimated to represent 95% of the global CO2 storage resource, which could amount to several centuries of present-day global emissions. Why, then, has CCS not yet been massively implemented? The answer is that the practical obstacles to its wide-scale implementation have, so far, proved more substantial than its reported benefits. One of the major challenges is reducing costs, with the largest projects amounting to billions of dollars in investment and operating expenses.

This is compounded by the efficiency penalty suffered by power plants equipped with CCS, which is often too significant to justify without financial incentives. In addition, public acceptance has proved to be a crucial factor in the success or failure of CCS pilot projects. Concerns have been expressed over the environmental impact of CO2 injection and the risks associated with induced seismic activity and leaks. Some may remember the 1986 ‘Lake Nyos disaster’ in Cameroon, in which the sudden discharge of a natural carbon dioxide cloud suffocated more than 1,700 people. Furthermore, there are ongoing legislative disputes in Europe to have CO2 reclassified as a commodity rather than a pollutant in order to enable its transport across borders. In spite of this, recent developments seem to signal a renewed momentum for CCS. The governments of India and Scotland have pledged to fund it, with others set to follow their lead. Three projects entered the operational phase this year, one being Australia’s Gorgon project, the world’s largest to date. China, whose power generation sector is largely reliant on coal, is leading the way in a new wave of projects and began construction of its first plant in 2017; meanwhile, Norway plans to turn CCS into a new pan-European industry within the next five years by collecting and storing European emissions below the North Sea. The challenge humanity now faces is to store 4 billion tonnes of CO2 annually by 2040; what is stored today amounts to just 1% of that target. For now, in the IEA’s words, CCS remains “way off target”.

Biomimetics: A marriage of science, nature and… philosophy?

Ayesha Hashim explores how the interplay of science and nature might accommodate a philosophical dimension

The application of optimised, interdisciplinary and evolved systems, ideas and principles found in nature to facilitate the creation of products and materials: a textbook definition of biomimetics (also known as biomimicry). Whilst this suffices as a description of the general methodology, the broader essence of the term is better encapsulated as follows: a design and engineering philosophy that seeks to capitalise upon, and in some respects influence, the human powers of observation and perception.

Biomimetics, as a field, came to fruition in the context of the study of nerve propagation (the way nerves transmit electrical impulses) in squid, at the hands of American biophysicist Otto Schmitt during the 1950s; this research engineered the ‘Schmitt trigger’, a device that converts an analogue input into a digital output. However, the very first product attributable to biomimetics is possibly the aircraft developed by the Wright Brothers in 1903—the extent to which the technology was informed by observing eagles in flight remains uncertain, but the connection is certainly compelling: four centuries earlier, da Vinci had produced illustrations of ‘flying machines’ modelled on bird anatomy and flight.

Past applications of biomimetic principles have been as transformative as they are fascinating: the creation of Velcro fasteners was inspired by the hook-like arrangements on cocklebur seed casings; certain bacteria-repelling materials used in hospitals and restaurant kitchens mimic patterns found on shark skin; the UV-reflecting property of silk spun by spiders to protect ensnared prey features on the exterior of certain buildings, helping to reduce bird injury. However, perhaps more notable is the evolution of biomimetics, from the imitation of desirable isolated features to the use of advancing science to create entire systems and structures. Current research within localised contexts (primarily drug delivery, tissue regeneration and medical imaging) is suitably advanced, reflecting this evolution, but the expansion of biomimetics into architecture and the management of environmental resources is a relatively new occurrence (consider Japanese bullet trains modelled on the kingfisher’s aerodynamic beak, the Helix Bridge mimicking DNA structure, or artificial photosynthesis). This is a product of cumulative progress across the sciences and engineering.

As biomimetics weaves ever more intricately into the everyday and into pragmatic conceptualisations of the future—emphasising the gold standard of functionality and efficiency set by natural selection (a standard unsurpassable by human intelligence)—pertinent philosophical questions arise: how far does the appropriation of nature’s design principles increase our duties of earthly stewardship? If our anthropocentric approach to life (humans as the most important creatures of evolution) is no longer justifiable, does that threaten our right to self-determine? Discussion of such questions in Freya Mathews’ intriguing paper, ‘Towards a Deeper Philosophy of Biomimicry’, results in a proposal for achieving ‘biosynergy’ that urges a reconfiguration of our fundamental desires to align with those of the environment. No doubt a radical suggestion, but it is one that highlights the part incredible, part devastating potential of biomimetics to supervene on human society and thought.

Spotty Stars and Exoplanets

James Miller considers the role of spotty stars in our search for habitable planets

Have you ever wondered if one day humans could live on another planet? Until the discovery of exoplanets, some thought our Solar System was the only one with planets. The Kepler space telescope quickly changed this view, discovering countless different families of systems. As it turns out, our meagre system is quite unusual compared to the zoo of others out there. Some have huge ‘Hot Jupiter’ planets whizzing around close to their host stars at unimaginable speeds. Others have strange half-and-half worlds where one side of the planet is tortured by its star while the other is left to freeze, facing the emptiness of space. It becomes a real struggle, then, to find any potentially habitable planets. How can spots help us with this? It turns out that studying another star’s spots may give us insight into the potential habitability of its exoplanets. The Sun has a major effect on the Earth’s climate—by extension, so will other stars on their respective exoplanets. But what are these ‘spots’? We have known about sunspots for a very long time. It’s not that the Sun is unwell, nor has it recently tried some bad skin cream. Unfortunately for the Sun, its spots are unavoidable and are caused by variations in its magnetic field. The important thing to know about these spots is that they are cooler than the rest of the Sun’s surface (despite still being some 3,800 kelvin!). Many solar flares and storms originate from sunspots, so it’s crucial they’re monitored. From observing these spots and various other changes, we know that the Sun has a roughly 11-year cycle of activity.

The spots first form at mid-latitudes, with successive spots emerging closer and closer to the Sun’s equator as the cycle progresses. Plotting spot latitude against time produces a ‘butterfly diagram’; this gives a deeper insight into the Sun’s magnetic activity lifecycle. It is believed that a prolonged lull in the Sun’s spot activity even contributed to the ‘Little Ice Age’ of the 17th century. So, do these spots exist on other stars? And if so, how can we track them? Despite these being perfectly reasonable questions, they are incredibly difficult to answer. The light we receive from other stars is represented by a few tiny pixels on a screen, rendering it impossible to distinguish any details. There have been successful attempts, however, at indirectly observing other stars’ spots, so we know they exist. These techniques vary from detecting slight variations in temperature over time, to measuring small differences in spectral observations of the star’s light. But these techniques cannot provide important information about where the spots are on the surface, or how they move—here come the exoplanets! As a planet transits in front of its host star, it blocks out some of its light. This causes noticeable dips in brightness, which can tell us all sorts about the planet and its star. Now, imagine the star has spots. A planet that transits in front of a spot will cause a slight rise within these brightness dips, as the spot is cooler and hence less light is blocked. That little bump in the light curve is how a star spot can be tracked. There are, of course, many other complications, but this is the underlying principle.
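For readers who like to tinker, that principle can be captured in a toy Python sketch of a transit light curve with a spot-crossing bump. All the numbers here (transit depth, durations, spot contrast) are invented for illustration rather than taken from any real star.

```python
import numpy as np

# Toy model of a transit light curve with a star-spot crossing.
time = np.linspace(-3.0, 3.0, 601)         # hours from mid-transit
flux = np.ones_like(time)                  # normalised stellar brightness

transit_depth = 0.01                       # planet blocks 1% of the starlight
in_transit = np.abs(time) < 1.5            # a 3-hour transit window
flux[in_transit] -= transit_depth

# While the planet crosses a cooler (dimmer) spot, it blocks less light,
# so the flux briefly recovers part of the way: the tell-tale bump.
spot_contrast = 0.3                        # spot taken as 30% dimmer than the surface
spot_crossing = np.abs(time - 0.5) < 0.2   # a 24-minute spot crossing
flux[spot_crossing] += transit_depth * spot_contrast

print(f"out of transit: {flux[0]:.4f}")
print(f"in transit:     {flux[np.abs(time) < 0.1].mean():.4f}")
print(f"over the spot:  {flux[spot_crossing].mean():.4f}")
```

Repeating transits then reveal where on the dip the bump appears, which is what lets astronomers follow the spot from one orbit to the next.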

By plotting the butterfly diagram for the star, we can then begin to analyse its effects on surrounding exoplanets. One day, this may enable us to find the perfect host star for a new human colony!

Geoengineering: A radical way to fix the climate or an imminent threat to humanity?

Nicholas Folidis explores how geoengineering could save our planet

It was November 2015 when delegates from 195 countries gathered in Paris for the United Nations Climate Change Conference. On December 12 of the same year, all parties finally reached a consensus: a landmark agreement called the ‘Paris Climate Accord’. Simply put, the Paris Agreement is a pledge by all UN member countries to reduce their greenhouse gas emissions in an effort to prevent the Earth’s average temperature from rising and to maintain it well below 2°C above preindustrial levels. The Paris Agreement may be one of the greatest, and most ambitious, diplomatic victories in history, but it is far from perfect. Scientists believe that temperatures are highly likely to breach the 2°C threshold and reach as high as 2.7°C or even 3°C above preindustrial levels. Emission caps are still too loose and, on top of that, the agreement is not legally binding, meaning that signatory countries can easily withdraw from the pact (as the United States announced in June 2017). Perhaps we need to take more radical measures to protect our environment, or at least consider a ‘Plan B’ before it is too late. Geoengineering, otherwise known as ‘climate engineering’, is the rapidly emerging field concerned with counteracting the effects of climate change, or even averting global warming altogether, through deliberate large-scale intervention in the Earth’s climatic system.

The truth is that geoengineering can be as simple as planting a lot of trees to decrease CO2 levels in the atmosphere, or as complex as seeding heat-trapping clouds with ice crystals to drastically cool them down, or even spraying sulphate particles into the stratosphere to mimic the cooling effect of volcanoes, where, combined with water vapour, they would create a haze surrounding the planet and reflecting roughly 1% of the sunlight away from the Earth. There are plenty of other techniques proposed, such as deploying massive space shields to deflect the sun’s rays, increasing the reflectiveness of clouds and, in many cases, crops to send more sunlight back into space, or even refreezing parts of the Arctic that have been affected by climate change and fertilising the ocean with iron filings to stimulate CO2-eating plankton. Such strategies have been at the centre of scientific discussions for many years, but due to the sensitivity of the subject those conversations mainly took place behind closed doors until only recently. Opinions continue to differ and the science is not settled yet. Scientists like Canadian solar geoengineering expert, and Harvard professor of applied physics and public policy, David Keith, are frustrated by the so-far slow response of countries and governments worldwide to cutting emissions and believe that geoengineering approaches are our best bet for reducing some of the climate risks that come from the accumulated carbon dioxide.

By contrast, scientists like Pat Mooney, of the Ottawa-based ETC Group, an international non-profit organisation monitoring the effects of emerging technologies, oppose such intervention. Mooney is afraid not only of the unforeseen impact of such techniques but also of the possible recklessness with which governments and big corporations might address the issue. As promising or pioneering as these approaches may sound, their effects remain largely unknown. Clear guidelines and safeguards for research must be set first to ensure scientific integrity and safety. A report on climate intervention experiments, published by the American Chemical Society, recommended that “countries establish international governance and oversight for large-scale field tests and experiments that could significantly modify the environment or affect society” in order to avoid causing irreversible damage to the climate and the environment. In fact, the Royal Society is currently collaborating with various organisations to develop such guidelines and ensure research is carried out in an environmentally responsible manner. Geoengineering is certainly an exciting and promising field, but it is equally controversial and, for the time being, should act only as our last resort.

Biomimetics: Holy Mother of Pearl!

Marriyum Hasany discusses emerging developments in biomimicry

Natural selection has allowed organisms to evolve systems and attributes that are remarkably efficient and resistant to the harsh elements of their environment. Hence, scientists have been looking to such systems, imitating their properties to synthesise new materials—a science referred to as biomimetics.

Examples of biomimetics are abundant, from the design of Velcro, inspired by cockleburs attaching themselves to a dog, to sealants created by mimicking the hydrophobic properties of lotus leaves. Now, research is being conducted to understand the structure of nacre, also known as Mother of Pearl, in order to eventually synthesise nacre-mimetic materials with the sought-after mechanical properties of their inspiration. Found in the inner layer of mollusc shells, nacre has a rather aesthetically pleasing iridescent look that made it historically popular in architecture, musical instruments and other decorative items. However, scientists around the globe find this material more fascinating for its strength and light weight than for its beauty. Nacre is an organic-inorganic composite, made of polygonal aragonite (calcium carbonate) nanograins and ductile biopolymers, which make up 95% and 5% of its volume respectively. The aragonite nanograins are held together to form single-layer nanoplates, and these plates are stacked upon one another, with the adhesive biopolymers fastening the entire structure together. This forms a rather “brick and mortar” structure that gives nacre most of its strength, measured to be three times that of calcium carbonate.

Research into nacre-mimetic synthesis is warranted by nacre’s desirable properties, including its high tensile strength, toughness, fracture resistance, light weight and sustainability. In order to synthesise a nacre-like material, six main categories of technique have been developed: conventional methods for bulk ceramics, freeze casting, layer-by-layer deposition, electrophoretic deposition, mechanical assembly and chemical self-assembly—each with its own advantages and disadvantages. Nacre’s properties give it several applications, especially in biomedicine: it can be injected into bones to repair defects in bone substitution, and used as a coating for metal implants. Additionally, when added to soft materials it increases their strength and elasticity without compromising on mass, and therefore can be used in construction and aerospace engineering. Nacre-mimetic materials could be a valuable resource, so understandably many scientists are tirelessly researching techniques to improve and accelerate synthesis of the material. Who knows how long it will be before this technology, with its origins in molluscs found by the sea, is employed on a much larger scale.

How Nature Connects the Dots

Jonny Wise muses on the parallel networks in life

In an ever-growing and modernising society, connectivity becomes more and more important. Not only do we rely on it as individuals, but entire infrastructures depend on it. We live our lives – often without realising it – through interactions with a multitude of both natural and artificial networks: from the network comprising our social circles to the complex neural network that makes us conscious; from the circuitry built into our computers to our vascular circulatory system; and from the veins in the leaf of a tree to the Internet. In academia, a network is anything represented by a set of nodes and connections. The goal of a network is to transport something between the nodes according to some overriding optimisation, which may be dynamic. For example, when designing a network of roads, the nodes are junctions and the edges are the connecting roads, while the goal is for vehicles to travel as quickly as possible from one node to any other node in the network. Of course, restrictions arise since vehicles may not occupy the same space, so one has to think about how to reduce flow on popular routes. One may think that the obvious solution is simply to build more roads; however, this is not always beneficial and may actually result in increased journey times – see Braess’s paradox, sketched in numbers below. The true significance of network science becomes apparent when its universality is considered. The above example is not particular to traffic; it does not matter whether we consider cars, blood, or even information.
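To make that counter-intuitive result concrete, here is a minimal Python sketch of the textbook Braess network; the road layout and travel-time numbers are the classic illustrative ones, not anything specific to this article.

```python
# Braess's paradox, textbook version: 4000 drivers travel from A to D.
# Roads A->B and C->D each take (cars using them)/100 minutes;
# roads A->C and B->D take a fixed 45 minutes.

drivers = 4000

# Without a shortcut, traffic splits evenly over A->B->D and A->C->D.
per_route = drivers / 2
time_without = per_route / 100 + 45            # 20 + 45 = 65 minutes

# Add a "free" shortcut from B to C. A->B (at most 40 min) always beats the
# fixed 45-minute roads, and so does C->D, so every driver ends up on
# A->B->C->D and both variable roads carry all 4000 cars.
time_with = drivers / 100 + 0 + drivers / 100  # 40 + 0 + 40 = 80 minutes

print(f"Equilibrium journey time without the new road: {time_without:.0f} min")
print(f"Equilibrium journey time with the new road:    {time_with:.0f} min")
```

Adding capacity makes everyone's journey fifteen minutes longer, purely because each driver optimises selfishly.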

Similar sorts of features may be observed to produce certain properties, such as flow efficiency, robustness to link loss and energy cost in production. It is often this principle that motivates and justifies the study of very specific and niche networks. The study of naturally occurring networks in biological systems is of particular interest, since they are often the product of millions of years of evolution. These networks are studied because they may be worth mimicking when artificial networks with certain characteristics are desired. For example, physicists at Technische Universität Dresden are investigating the liver as a system of two non-overlapping networks of pipes – one that transports blood to every cell and one that transports bile away from each cell. The networks are classified and compared based on statistics such as average link length, channel width, junction planarity and ‘loopiness’. It is possible to simulate events such as link impairment due to alcohol and observe how the network re-routes flow in an optimal way. Researchers hope that this study will not only lead to a better biological understanding of the organ, but also offer motivation for the design of modern supercomputers. This is just one example of a very specific area of natural science being studied with the confidence and expectation that the findings may be far-reaching. The recent explosion in artificial intelligence technologies is another testament to the advances being made in network science, and will unequivocally affect how we interact with machines. Existing within nature, we are fortunate to have the universe as a playground for scientific analysis and, ultimately, inspiration when building new things.

The Salton Sea: California’s Wasteland

John Dunsmuir reports on a disastrous consequence of mankind’s interference with nature

The Salton Sea is a 900 km² lake located in the middle of the Colorado Desert of California—but its existence is the result of a complete accident. In 1900, the California Development Company attempted to divert the Colorado River, in the hope of fertilising the arid desert and creating an agricultural basin. Hence, the 23 km Alamo Canal was built. At first the plan succeeded; the Salton Sink became fertile and crops were planted. However, the run-off water draining from the highly saline soils of the Imperial Valley was salty, and the Alamo Canal became filled with silt. Attempts were made to alleviate blockages and divert the canal, but with no success. The winter of 1905 brought heavy rainfall and snowmelt, causing the canal to swell and burst its banks. For the next two years, the entire Colorado River flowed freely into the Salton Sink, filling it with water—an ecological disaster had begun. Governmental lawsuits against the California Development Company’s mismanagement ran for a decade. But where some found disaster, others found an opportunity. As the engineers left, developers moved in. They built houses, roads, schools, and all the other creature comforts of a modern society.

One advert described the Salton Sea developments as “a Palm Springs with water” and “a miracle in the desert”—and for many years, this was true. However, the only inflow of water was agricultural run-off laced with salts leached from ancient deposits, which would result in ever-increasing salinity and pollution. A body of water this large in the desert was a recipe for disaster. By the 1960s, evaporation had caused the water to become saltier than the sea. Even the most resilient fish populations, introduced in the early 20th century when the lake was still fresh, were beginning to suffer. Massive die-offs occurred as tens of thousands of fish washed up on the shore each year. Beaches became filled with crushed fish bones, which gave off a smell described by the US Geological Survey as “noxious” and “objectionable” as they rotted. Temperatures often reached 48°C, leaving the air humid and unbreathable, while fertiliser run-off caused eutrophication (excessive enrichment of a body of water with nutrients). This led to an increase in algal blooms and bacteria levels, which posed a health risk to the local populations who had migrated to this desert haven. Furthermore, the location of the lake over the San Andreas Fault resulted in mudpots and even mud volcanoes, turning the landscape into a bubbling, hellish environment. During those scorching summers, there was also uncontrollable flooding. Residents of the local towns were forced to flee their homes, often abandoning their belongings. What were once miracle cities were now becoming ghost towns.

Today, only a few thousand residents remain, with around 30% living at or below the poverty line. Roads sit named, waiting for developments that never came. The only tourism is from those interested in seeing a real ghost town. The lake itself has served as a reminder: to remain responsible with nature, to not put profit above wellbeing, and to consider the unconsidered consequences of our actions.

Nature’s Very Own Nuclear Reactor

Bethany Rothwell looks back on how Mother Nature beat nuclear scientists to the punch

Mankind has been harnessing the power produced by nuclear fission since the 1950s. While this may seem like a man-made concept, it turns out that nature got there billions of years before us! In 1972, it was discovered that a mine in Oklo, Gabon, is home to 17 natural nuclear fission reactor sites that operated nearly two billion years ago. The discovery was made when scientists found that the mine contained significantly less fissile uranium than expected during routine mass spectrometry. With this being such a rare and useful fuel, they were keen to know where the missing 200 kg (enough to make six nuclear bombs) had gone. As it turned out, the uranium had been used up in natural fission reactions over the course of several hundred thousand years. Nuclear fission occurs when an unstable nucleus splits into smaller parts, releasing huge amounts of energy. The most common fuel is the isotope uranium-235. Upon impact by a neutron, a uranium-235 nucleus fissions to produce two lighter nuclei, plus a few more neutrons; these then go on to cause further fissions in a chain reaction, provided three main conditions are met. One: enough concentrated uranium is present to allow for a self-sustaining reaction. Two: there is a significant abundance of uranium-235 within the sample. Three: a moderator, such as water, is available to slow down the neutrons—if they’re travelling too fast, it’s unlikely they’ll be absorbed.

Over the decades, significant research has ensured nuclear reactors meet these conditions perfectly—which shows how remarkable it is that Mother Nature did it all by herself! So how was this achieved? When the Earth was formed, there were no significant uranium concentrations—the sandstone at Oklo contained only tiny amounts dispersed amongst its layers. When the atmosphere became rich in oxygen around two billion years ago, that uranium was transformed into its soluble oxide. Once water could seep through the sandstone, the deposits were dissolved and became mobile, resulting in deposits concentrated enough to satisfy the first condition. A natural fission reactor couldn’t operate today, because the natural uranium-235 concentration of 0.72% is too low; for uranium-235 to be used as a fuel source, power stations often enrich it to a concentration of 2-4%. However, when the Earth was younger, this abundance was higher—the relatively short half-life of uranium-235 means it has decayed away far faster than uranium-238.
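A quick back-of-the-envelope sketch in Python, using standard textbook half-lives rather than figures from this article, shows how winding the decay clock back two billion years recovers an abundance close to the value quoted below.

```python
import math

# Extrapolate today's natural uranium isotope ratio back two billion years.
half_life_u235 = 7.04e8     # years
half_life_u238 = 4.468e9    # years
t = 2.0e9                   # years before present

lam_235 = math.log(2) / half_life_u235
lam_238 = math.log(2) / half_life_u238

# Present-day natural abundance: 0.72% U-235, 99.28% U-238 (by atoms).
ratio_now = 0.72 / 99.28

# Running decay backwards multiplies each isotope by exp(+lambda * t),
# so the ratio grows by exp((lam_235 - lam_238) * t).
ratio_then = ratio_now * math.exp((lam_235 - lam_238) * t)
abundance_then = ratio_then / (1 + ratio_then)

print(f"U-235 abundance two billion years ago: {abundance_then:.1%}")  # ~3.7%
```

The result, roughly 3.7%, sits right next to the 3.6% figure measured for the Oklo ore.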

When the Oklo reactor began fissioning all that time ago, the concentration was a healthy 3.6%, thus fulfilling the second condition perfectly. The final condition was met by oxygen-bearing water, which acted as a suitable moderator, allowing fission to occur until the heat released caused the water to boil away. Fission could begin again only once things had cooled enough for the water to flow back into the reactor. This continuous, stable cycle allowed the reactor to keep producing energy for such a long period (similar to the negative feedback mechanisms used to keep modern reactors safe). It’s clear that this natural reactor was remarkable. Perhaps even more astounding is its ability to have safely contained the radioactive products beneath Oklo for nearly two billion years. With investigations into the effects of nuclear waste disposal on its surroundings taking place, the Oklo mine emerges as the best long-term study scientists could have hoped for. There is no doubt that mankind can learn a lot from this incredible discovery—it seems that Mother Nature still has lots to teach.

CRISPR/Cas9 and its Natural Inspiration

Jayde Martin highlights the role of evolution in developing the genome editing tool

What’s natural about genetic engineering? That’s the first question I hear you ask. I would like to argue that it is, indeed, nothing short of organic. CRISPR/Cas9 is a unique technology that enables geneticists and medical researchers to edit parts of the genome by removing, adding or altering sections of the DNA sequence. Significantly, its inspiration is a type of naturally occurring mutational change; consequently, this type of genome editing can be classed as ‘natural’. CRISPR/Cas9 is modelled on an entirely organic process in bacteria: scientists have learnt to utilise the adaptive immune response of Staphylococcus aureus to viral infection as a genetic modification template. This matters because it could help the human species overcome inherited genetic conditions that would otherwise lead to rapid degenerative decline and early death. Here, we have a case of a scientific technique derived from a natural process to further manage and extend human life expectancy. For this alone, I would like to state that genetic engineering is, in fact, natural.

The immune response of a bacterium, such as S. aureus, to a viral infection is the result of prokaryotic evolution. The bacterium creates two RNA strands, one of which mirrors a section of the DNA sequence of the virus in question. These two RNA strands then form a complex with Cas9, which is essentially a nuclease, the cut-and-paste enzyme of the biological world. Guided by the matching RNA, Cas9 locates that section of the viral DNA and severs it. It essentially robs the virus of its functional DNA, without which the virus cannot replicate. Cas9 and its mischievous RNA strands carry a set 20-base-pair sequence to match against the viral DNA; many of these Cas9 complexes will target different sections of the virus to fully incapacitate it, cutting strands across its entire DNA sequence. So how does this relate to the manipulation and mutation of the human genome? Instead of 20 bases of RNA that match viral DNA, Cas9 can be programmed to target a chosen 20-base-pair stretch of the human genome and cut there. Controlling what and where CRISPR/Cas9 cuts is how we exploit this natural process as a tool for our own ends–just like we did with fire and the invention of the wheel.
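As a purely illustrative sketch of that targeting logic (using invented sequences, and the ‘NGG’ PAM recognised by the commonly used Streptococcus pyogenes Cas9 rather than the S. aureus enzyme discussed above), a few lines of Python can show how a 20-letter guide picks out a cut site in a longer DNA string.

```python
import re

def find_cut_sites(target_dna: str, guide: str):
    """Return indices where Cas9 would cut: each guide match followed by an NGG PAM."""
    assert len(guide) == 20, "guides are conventionally 20 bases long"
    sites = []
    for match in re.finditer(guide, target_dna):
        pam = target_dna[match.end():match.end() + 3]
        if len(pam) == 3 and pam[1:] == "GG":   # PAM pattern: any base, then GG
            sites.append(match.end() - 3)        # blunt cut ~3 bases upstream of the PAM
    return sites

# Made-up sequences purely for demonstration; only the given strand is
# scanned (the reverse complement is ignored to keep the sketch short).
guide  = "GATTACAGATTACAGATTAC"
target = "CCCC" + guide + "TGG" + "AAAA"         # guide followed by a TGG PAM

print(find_cut_sites(target, guide))             # -> [21]
```

Swap in a different 20-letter guide and the cut moves with it, which is the whole trick: the protein stays the same, and only the RNA address changes.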

The prevailing fear of changing our genes is arguably outdated, so the important question to pose is: why does our control over it scare us? Yes, there are fears of neo-eugenics (an ideology concerned with ‘improving’ a species by influencing or encouraging reproduction among parents with desirable genetic traits). However, through awareness and consideration of disability studies, identity politics, and even the study of post-colonialism, we, as the next generation of researchers, can avoid the mistakes of the past. It is time to change the way in which we perceive genetic engineering. Shedding the image that dystopian science fiction has painted of it, I believe we can make it into something different. We can inclusively adapt genetic engineering to our advantage: its potential application in genetic therapies is promising for carriers of genetic disorders such as phenylketonuria, cystic fibrosis and sickle cell disease. Instead of the messy idea of eradicating ‘disease’, we can develop a genetically diverse spectrum of individuals, and reinstate the right to a full and longer life in those who would otherwise succumb to genetic disorders. Genetic modification should always be considered alongside identity politics and ethics. But instead of blindly fearing advances in biotechnology, we should opt to utilise them sensibly to improve the quality of living–after all, this technology is born of a naturally occurring process!

Branching Out to Nature

Marion Cromb uncovers the logic behind the branching patterns of trees


Have you ever noticed that on trees, small twigs tend to stick out from thick branches at right angles, but branches of similar size split from each other at smaller angles? This is not a feature unique to arboreal branching, and in fact it can be understood with a model developed for the human vascular system: Cecil D. Murray’s physiological principle of least work. Work is the energy transferred by a force, so this principle is about finding the configuration that expends the least energy. In other words: nature is lazy. Just as the arteries Murray studied transport blood throughout the body, we can model tree branches as a transportation network for water. Moving fluid along a narrow tube encounters larger frictional resistance (and thus takes more work) than moving fluid the same distance along a wide tube. So, to move from one point to another, taking a direct route in a small channel can be a lot less efficient than taking a longer, less direct route along a large channel followed by a short perpendicular hop in a narrow channel.
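A rough numerical sketch of that trade-off, with invented dimensions and the standard Hagen-Poiseuille scaling (pumping work proportional to tube length divided by the fourth power of its radius), might look like this in Python.

```python
import math

def pumping_cost(length, radius):
    """Relative work to push a fixed flow through a tube (arbitrary units)."""
    return length / radius**4

# Destination: 10 units up the trunk and 2 units out to the side.
# Option 1: one thin diagonal channel straight to the destination.
direct_narrow = pumping_cost(math.hypot(10, 2), radius=1)

# Option 2: ride the wide trunk, then a short perpendicular twig.
trunk_then_twig = pumping_cost(10, radius=3) + pumping_cost(2, radius=1)

print(f"direct narrow channel: {direct_narrow:.2f}")
print(f"wide trunk + twig hop: {trunk_then_twig:.2f}")
# The indirect route is several times cheaper, which is why thin twigs
# tend to leave thick branches at close to right angles.
```

The exact numbers are made up, but the fourth-power dependence on radius is what makes the long-wide-then-short-narrow route win so decisively.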

If two equal-sized branches fork off the trunk on opposite sides, they do not deflect the trunk and emerge at the same angle. If just one branch emerges from the trunk, this will deflect the trunk, often considerably. Depending on the relative diameter of the trunk, branches come off at an angle of between 70° and 90° to the original trunk, and the trunk is deflected by between 0° and 90°. To have a network that fills the available space efficiently (e.g. to collect the most sunlight), it is necessary to minimise the length of the inefficient narrower channels (which can fill gaps between larger channels) whilst minimising the overall material used. This results in the branches that ‘feed’ the biggest areas being the thickest. Leonardo da Vinci observed that, as a rule of thumb, the cross-sectional area of a branch is equal to the sum of the areas of the branches it splits into.
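Putting an invented number on that rule of thumb: a branch splitting into two equal daughters keeps its total cross-sectional area if each daughter's diameter is the parent's divided by the square root of two, as this small Python check shows.

```python
import math

# Da Vinci's rule in numbers (an illustrative check, not a measurement).
parent_diameter = 10.0                       # cm, chosen arbitrarily
daughter_diameter = parent_diameter / math.sqrt(2)

parent_area = math.pi * (parent_diameter / 2) ** 2
daughters_area = 2 * math.pi * (daughter_diameter / 2) ** 2

print(f"parent area:    {parent_area:.1f} cm^2")
print(f"daughters area: {daughters_area:.1f} cm^2")   # equal, as the rule predicts
```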

Of course, the trees themselves have not predetermined a high-efficiency network to grow into; they grow in a modular fashion, obeying the same simple rules of cell division in the meristem at every stage of growth. But despite this fixed process, trees don’t turn out as completely uniform, repeating structures, because environmental factors come into play – for example, competition for resources such as sunlight, and twigs snapping off in the wind. Those trees whose growth rules combine with these external factors to create efficient branching networks are the ones that evolution favours and that we see thriving today. Branching is the result of very different mechanisms in many different physical phenomena: Lichtenberg figures, lightning and river systems, to name a few. These branched networks all have similar properties and statistics to purely mathematical networks generated by random numbers, hinting that effective branching patterns depend more on the geometry of space itself than on the processes behind them.