Sunday 24 July 2016

You are wrong about nanotechnology

Nanotechnology. The foremost buzzword of the 2000s, holding the promise of a technology that could make anything, with little input other than raw materials. A liquid factory, in a sense. Others saw it as the precursor to the singularity, with resource scarcity becoming a thing of the past.

Let's take a closer look at this mainstay of science fiction.

Medical 'millibots' from IEEE depicted as the size of blood cells
Nanotechnology has had a long and fruitful history in science fiction.
The first mention of microscopic machines in fiction dates back to 1931, and the first use of 'nano' technology goes back to 1937. Nowadays, nanotechnology takes center stage in movies such as The Day the Earth Stood Still and Star Trek: First Contact. It is generally malevolent, frequently breaks out of control, and is almost always capable of feats impossible for modern machinery to accomplish in similar timeframes.
The T-3000 from Terminator: Genisys
For example, in the Terminator franchise, nanomachines can act as cells with near-magical magnetic and matter manipulation abilities. In GI Joe, nanomachines were a weapon that could reduce a city to rubble in a matter of hours. For many, they are the technological imitation of biological horrors such as viruses or insects. For others, they are the ultimate evolution of the production line machine. 

So, how will you handle nanotechnology in your science fiction? How realistic are the abilities you've granted it? Let's first understand nanotechnology as it is today, the challenges it faces, and how to design nanotechnologies that work towards their true strengths.

Understanding nanotech 

By definition, nanotechnology deals with the domain of the nanometer, or a billionth of a meter. At this scale, atoms and molecules can be manipulated. A hydrogen atom is a quarter of a nanometer wide, DNA is 2 nm across, and the smallest bacteria are massive by comparison, at 200 nm. Electrons are confined to such small regions that quantum effects play a significant role.
A logarithmic scale for size
The most important forces at this scale are electromagnetic. Particle mass and momentum are utterly dwarfed by the forces they wield. The surface-to-volume ratio of materials at that scale is unimaginably large, so their properties can be completely alien in comparison to their behaviour at the macro scale.

This leads to non-intuitive solutions to tasks we consider simple in our world. You'd imagine that a pincer could take hold of a molecule the same way a macro-scale device would pick up an item. In reality, it is the ionic interactions at the surfaces of the manipulator and the object that bind them together.

A protein molecule in water.
Another misconception is about molecules themselves. They are not the rigid ball-and-stick models you learned about in class. They are in fact pinpoints of charge held together by electron interactions, which bend, vibrate, twist and spin according to degrees of freedom that chemists spend lifetimes uncovering. They are not physical objects in the sense that they do not have proper, solid volumes. In many ways, it is easier to see them as clouds of atoms that behave in a certain way. Of course, there are differences between the enormous enzymes and the tiny water molecules they bathe in.

A particle's motion on water, called Brownian motion.
A final point is the landscape. 

Gas molecules fly around at near-supersonic velocities. Fluids are the brutal ball-pits of nature. Solids are just molecules that violently shake in close formation. Interfaces between these phases are even more chaotic. A solid-gas interface consists of an expanse of wobbling jelly being beaten by what resembles a dense, unending hail of gas molecules. Fluids such as water constantly evaporate, with molecules jumping out of their medium, meaning the interface is much rougher and more uneven than a still glass of water would suggest.

So what is there to understand?

It is that nanomachines act in a world completely different from ours. You cannot simply replicate something that is trivial at our scale by using the same tools at the nanometer scale. Conversely, selecting one specific molecule out of a billion others can be trivial.

Also, just like the natural world, interactions at the nanoscale have a massive variety of effects and consequences. This is reflected by the number of domains of study nanotechnology is involved in, ranging from nano-materials to atomic manipulators and quantum dots.

Today's progress
Gears made out of graphite tubes.
We have been studying interactions at the nano-scale since before anyone supposed that Robert Brown's pollen particles were being moved around by water molecules much smaller than he could observe.

Progress has been impressive in the domain of nanomaterials research, with applications in structural components, batteries, electronics and medicine. These can be used in bulk, to draw out their effects at large scales, or as agents that improve on the function of more conventional mechanisms.

Others, such as the microprocessor industry, build macro-scale machines from the nano-scale and up. The individual building blocks can be bulk material cut down and shaped into items that work on the nano-scale, or they can be as complex as nano-actuators and nano-sensors. 

Most interesting is research into bionanotechnology, which uses nature's own nanomachinery in artificial roles, and molecular self-assembly, which could lead to the fabled von Neumann machines.

What are the principal advantages of nanotechnology?

A simple diagram of how enzymes work
Nanomachines are able to act as programmed proteins and enzymes, which grants them similar abilities to their natural counterparts. One of their most impressive abilities is their specificity. They are able to reliably perform the exact same action on the exact same molecule endlessly, which is incredibly important in biological processes. In artificial systems, this specificity can be used to act on a specific substrate within a mixture without going through a time- and energy-consuming separation process.
A reaction between Carbon Monoxide and Oxygen trapped in a solid.
Another useful characteristic of nanoscale interactions is the rate at which they happen. Due to their very large surface-to-volume ratio, nanomachines would be in permanent contact with hundreds to thousands of their target molecules at any one time. Combined with the fact that a chemical bond can be formed or broken in a trillionth of a second, nanomachines will be able to start their reactions very quickly.

Depending on the type of nanomachine, it can trigger a self-sustaining chemical reaction or, like most biological systems, act as an enzyme that facilitates specific reactions that would not happen in normal conditions. For example, peptide-cleaving enzymes of an E. coli bacterium increase the rate of reaction of their substrate from once in 400 years to over 100 per second... per protein.
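
To put that figure in perspective, here is a quick back-of-the-envelope calculation. It is a minimal sketch that only uses the two rates quoted above; everything else is unit conversion, so treat it as a rough order-of-magnitude estimate rather than a measured value.

```python
# Rough rate-enhancement estimate using the figures quoted above:
# uncatalysed cleavage happens roughly once per 400 years per substrate molecule,
# while the enzyme manages over 100 reactions per second per protein.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

uncatalysed_rate = 1 / (400 * SECONDS_PER_YEAR)  # reactions per second, no enzyme
catalysed_rate = 100.0                           # reactions per second, with enzyme

enhancement = catalysed_rate / uncatalysed_rate
print(f"Rate enhancement: ~{enhancement:.1e}x")  # roughly 1e12, a trillion-fold speed-up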

E. coli bacterium
Nanomachines travel very quickly within their medium. Simple diffusion depends strongly on pressure and temperature, but in most cases amounts to several meters per second in gases, down to a few centimeters per second in liquids. This might sound slow, but a large nanomachine would travel ten billion times its own length each second, and a pin-head-sized clump of nanomachinery would similarly spread through three hundred million times its own volume.

At macroscopic scales, this would correspond to a car-sized robot travelling at 133 times the speed of light, or a factory blowing up to the size of the Earth in about a millisecond.
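
Here is the arithmetic behind that comparison as a small sketch. The 1 nm machine size and 10 m/s diffusion speed below are assumptions chosen to reproduce the 'ten billion body lengths per second' figure above, not measured values.

```python
# Scaling the "body lengths per second" comparison from nano to macro scale.

nanomachine_length = 1e-9   # metres (assumed ~1 nm machine)
nanomachine_speed = 10.0    # m/s (assumed diffusion speed in a gas)

body_lengths_per_second = nanomachine_speed / nanomachine_length
print(f"Nanomachine: ~{body_lengths_per_second:.0e} body lengths per second")

# A car moving at the same number of body lengths per second:
car_length = 4.0            # metres
speed_of_light = 3e8        # m/s
car_speed = body_lengths_per_second * car_length

print(f"Car-sized equivalent: {car_speed:.1e} m/s, "
      f"about {car_speed / speed_of_light:.0f} times the speed of light")
```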


A final advantage of nanomachinery is the ability to act in places no other sort of machine can reach, such as the interior of cells or the microfissures that signal the beginning of serious structural failure.
   

So what's the problem?


Nanomachines from Star Trek...
Nanomachines face a large number of very hard-to-solve problems. These problems are most significant for the traditional image of 'nanites': nano-scale robots that replicate familiar roles at smaller scales. Here is a short list of the major challenges:

Heat


Nanomachines do not escape the effects of temperature variation. 
Nanomachines will likely be very frail and require specific, controlled environments to work properly. They are victims of their surface-to-volume ratio, which causes them to lose a lot of heat during normal operation. If they grow too cold, they lose their incredible ability to move quickly and react with many molecules at once. Worse, they might fall under the threshold required to activate specific reactions, holding up the production line. In nature, the production line produces energy and keeps the cell alive, but this might not be the case for nanites.

The opposite is also a problem, with nanomachines heating up in lock-step with their environment. Just like proteins, the molecules that form the nanomachine will vibrate more vigorously as temperature increases. If it increases too much, they will break out of their bonds and the nanite will unravel.


Proteins denature after a mere 20 kelvin increase over their optimal temperature, as the weaker hydrogen bonds are broken. Stronger bonds are broken at higher temperatures, but a nanomachine is only as strong as its weakest elements. This is especially important in a space environment, where a nanomachine on a spacecraft's hull can be frozen or fried in seconds.
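
The surface-to-volume penalty driving all of this is easy to quantify for an idealised spherical machine. The sketch below compares a hypothetical 100 nm nanite to a 1 m robot; both sizes are purely illustrative assumptions.

```python
import math

def surface_to_volume(radius_m: float) -> float:
    """Surface-to-volume ratio of a sphere in 1/m; it scales as 3/r."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume

nanite = surface_to_volume(50e-9)  # 100 nm diameter nanite
robot = surface_to_volume(0.5)     # 1 m diameter robot

print(f"The nanite has ~{nanite / robot:.0e} times more surface per unit volume")
```

Ten million times more relative surface area means the nanite exchanges heat with its surroundings almost instantly, which is exactly why it freezes or fries so easily.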
 

Contaminants:

An allosteric site is where inhibitors bind to an enzyme.
If a nanomachine is designed to react with a specific molecule, like an enzyme with a specifically-shaped reaction site, then a very similar molecule might get stuck and render the nanomachine useless. Alternatively, a very reactive molecule, or another, natural enzyme, might react with the nanomachine itself, breaking it. In both cases, nanomachines end up being vulnerable to 'poisons' that can quickly halt their function. This might negate a nanomachine's ability to be used in an uncontrolled environment. It is especially important inside living beings, which have multiple mechanisms that would filter out nanomachines as foreign particles.

Energy:


How do you deliver power to a nanomachine? Chemically? How do you make sure that your fuel won't interfere with the substrate? Through direct radiation, such as microwaves? Then how do you deliver the energy evenly, without frying the nanites on the surface, while still getting enough energy to the deepest agents?
Microscope image of the trajectories of gold-ruthenium nanomotors powered by ultrasound.
Extracting energy from radio waves or from a magnetic field would be the easiest solutions, but each comes with its own set of limitations. The biggest is energy storage: nanomachines have very small internal volumes.

Data:


Nanites will become most useful when we can control their actions quickly, reliably and while they are working. The problem becomes how to deliver our instructions to the nanites.

First of all, nanomachines can only be equipped with incredibly small receivers. They have literally only a few atoms with which to receive, hold and process data. They then have to transmit this data to the other members of a swarm and coordinate their actions. The latter is extremely important: nanites unable to tell each other something as simple as 'I have completed task A' will waste effort repeating tasks that are already done.
Once 'top-down' communications have been established, and nanites are sending data from one edge of the swarm to another quickly enough to matter, there is still the problem of returning data to the user: 'bottom-up' communication. Chemical markers can be released, or switches used that create signals in a magnetic field, but they'd have to be read and decoded quickly enough to matter.
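
A toy sketch of why those 'I have completed task A' messages matter. Nothing here corresponds to real nanite hardware; it simply contrasts a swarm that shares completion data with one that does not.

```python
import random

def wasted_attempts(num_nanites: int, num_tasks: int, share_completions: bool) -> int:
    """Count how many nanites spend their cycle redoing an already-finished task."""
    completed = set()
    wasted = 0
    for _ in range(num_nanites):
        task = random.randrange(num_tasks)   # each nanite picks a task at random
        if task in completed:
            if share_completions:
                continue                     # swarm-wide data: skip finished work
            wasted += 1                      # no communication: duplicated effort
        completed.add(task)
    return wasted

random.seed(0)
print("Without completion messages:", wasted_attempts(10_000, 1_000, False), "wasted attempts")
print("With completion messages:   ", wasted_attempts(10_000, 1_000, True), "wasted attempts")
```

Even this crude model shows most of the swarm's effort being wasted once tasks start overlapping and nobody can say so.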

Simple solutions:


Why unleash a swarm of nanites on a bent piece of metal when you could straighten it with a hammer? How could a retro-engineered mecha-virus deliver a vaccine better than a simple injection? Take care to consider whether a simpler, more effective machine could solve the problem before applying nanotechnology to every situation.

The solutions

As always on ToughSF, we'll end with solutions instead of problems.


The majority of the problems raised in the previous section can be solved with sufficiently advanced technology. If you are creating a setting hundreds of years into the future, then it becomes more a matter of finding situations where nanotechnology has not yet been applied. In nearer-term settings, it is more useful to think about where nanotechnology genuinely fits.


The first solution is to directly reduce the vulnerabilities. Temperature resistance can be increased by protecting the nanites in a conductive shell with an insulating interior. Alternatively, take inspiration from viral spores and provide the nanites with a temperature-resistant 'hibernation mode'. Contamination effects can be reduced by coating the nanite in a non-reactive layer of metal, and increasing the number of verification checkpoints before a nanite is allowed to act on its substrate. Bottom-up communications can be handled by relays that nanites build as they move away from the surface. There are many more ways to handle these vulnerabilities. 

Another solution is to instead accommodate the nanites.
This includes creating a controlled environment, with stable temperatures and free of any contaminants. Energy input is handled externally, and any wastes or by-products the nanomachines produce are removed continuously. As the nanites do not have magical abilities of adaptation, multiple models will be injected into the controlled environment, one after the other, to accomplish specific tasks before they are removed.
DNA replication in real time
A very common characteristic, ubiquitous in any mention of futuristic nanotech, is self-replication: the incredible ability of a handful of nanites to build endless copies of themselves. Thanks to this exponential expansion, all you would need is a small 'seed' of nanomachines and suitable materials to have at your disposal a huge swarm of agents able to accomplish any task quickly. However, the true power of self-replication is the ability to replace damaged or inactivated units. If the nanites are able to replicate quickly enough, they could eliminate contaminants through sheer attrition. Alternatively, it would allow nanites that have entered hibernation due to temperature variations to rebuild their numbers in more clement conditions.
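
A quick sketch of that exponential expansion, assuming each nanite copies itself once per cycle. The masses and the one-hour doubling time below are purely illustrative assumptions.

```python
import math

nanite_mass = 1e-18    # kg per nanite (illustrative)
seed_mass = 1e-9       # kg: a barely visible speck of seed nanites
target_mass = 1.0      # kg of active nanomachinery wanted
doubling_time = 1.0    # hours per replication cycle (illustrative)

seed_count = seed_mass / nanite_mass
doublings = math.log2(target_mass / seed_mass)

print(f"Seed: ~{seed_count:.0e} nanites")
print(f"Doublings needed: {doublings:.0f}, so roughly {doublings * doubling_time:.0f} hours")
```

Thirty-odd doublings take a speck to a kilogram, which is also why the mutation problem described next compounds so quickly: every copying error is inherited by an exponentially growing number of descendants.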

Of course, relying too much on replication without any corrective external input will inevitably lead to the production of mutated or non-functional nanites. Even our own bodies, with multiple checkpoints for DNA replication and dozens of verification and repair mechanisms, produce hundreds of cancerous cells per day.
The vast majority of depictions of nanites are actually at the micro scale.
Nanomachines have many problems associated with working at such small scales. One way to avoid those problems entirely... is to work at the micro scale. With the individual agents a thousand times larger, you return to the well-understood and relatively mature domain of micromachines. Micromachines are common in high-tech industries today, and it becomes much simpler to posit that advances in nanotechnology applied to micro-scale machines have allowed the creation of the self-replicating, all-capable swarms of science fiction. They would be much more temperature resistant, easier to handle, much easier to communicate with and give orders to, and less vulnerable to chemical or electromagnetic damage. They would even be easier to produce.

Alternatively, do what scientists have done for hundreds of years: imitate nature. Bionanotechnology is a new field of study that aims to use nature's own nanomachines to do the work of their artificial counterparts. Engineered viruses, nano-actuators built from myosin, data stored in DNA... the 'nanites' in your setting could be no more than bacteria modified in a laboratory. You would gain the use of all the protection, repair and replication mechanisms that have evolved for the survival of the original organism, without additional research. Even better, the huge variety of existing bacteria, viruses and cells allows you to mix and match the characteristics you wish to obtain, instead of developing them from scratch.
DNA-based nanomachines are already being used to speed up HIV identification.
Finally, there's the option to 'give up'. No magical nanites. No one-stop-shop nanotechnology. Maybe you decide that the technology is too advanced for your setting, or the existence of typical SF nanotech will alter industry and society in ways you do not want. Or maybe it's as simple as the fact that you've come to appreciate nanotechnology as what it really is: a tool to use in situations where larger machines cannot do their job. The fad has already passed, and we'll soon see it in the same light as 'electric', 'atomic' and 'cyber'. 

30 comments:

  1. The greatest problem with nanotech for a writer is the possibility that as a material that can be both transmutory and inert it could theoretically be used for anything. Floors, clothes, tank barrels... It could be used for anything with the added advantage of being capable of transforming into something else. This naturally creates large headaches for writers.
    Those that try and create a get out clause by pointing out that omnipresent nanotech could be programmed to create havoc in society and thus must be limited ignore the cold truth that defences against viruses are generally superior to the viruses themselves.
    Thus there is only one solution to this, which forms the basis for my question: Would nanotech have a weaker grasp on itself than more rigid materials? I.e.: would I get away with more inert material for tank barrels, car engines, etc, even with the solutions you have posited above?

    Secondly, if a forger used nanotech to create multiple Mona Lisas, would it be possible to identify the real one at the atomic level?

    Replies
    1. I don't think it is physically possible to quickly re-shape the substrate matter into a wide variety of different shapes, from the nano-scale up, without a huge waste heat output.

      At the nano-level, you are breaking and re-forming chemical bonds. The stronger you want your resultant object to be, the more energy you have to put in to break the bonds.

      It is vastly more efficient to use macro-scale components that you move around. This can be done from milli-scales and bigger, so it would be the work of microbots, not nanobots.

      The other option is to do it slowly, like a plant growing into new shapes.

      As for structural rigidity: nanobots are individual machines that are not supposed to interfere with each other's movement. That means they do not form any bonds that would stick them together. The result, optimally, would be a fluid-like substance with more in common with rice than with a solid. So yes, materials held together by nanobots are necessarily weaker than solids connected by strong chemical bonds. To get around this, you have the nanobots leave behind a regular solid, which they break down before re-shaping.

      Mona Lisa: you'd have to analyze the original down to the atomic level to recreate it with the same accuracy. The only way to do that is destructive ;)

  2. A great article. Thanks for putting the work in to make it sensible and exciting.

  3. Considering that we are constructed from and surrounded by micromachines, you would think that extrapolating to realistic nanotech (which, as you point out, is almost always actually microtech) would be easy.

    Then again, zombies as a cultural phenomenon seem to show that our intuitive grasp of biology isn't that hot either.

    Replies
    1. We might end up with nanomachines anyways, if technology advances to the point where our wildest SF imagination fails to grasp their capabilities. By that point, everything else would have advanced by a similar degree, which might make nanotech not that special anymore!

      Zombies, I believe, are mostly the result of our ability to enjoy fantasies without our logic getting in the way.

  4. True, but you'd think that the idea of dead people running around would trip people's suspension of disbelief circuits pretty hard. It doesn't.

    We generally treat biology as part magic and part mundane, so we frequently make strange errors of thought about biological concepts (people often expect damaged machines to heal if turned off or left alone, for instance).

    Replies
    1. Zombies are a bit of a special case, in my opinion, as the concept can take several forms.

      On one end of the spectrum, you have an extension of mind-altering diseases like rabies: a deadly infection that drastically changes behaviour by turning the victim into an enraged monster, in order to better propagate. Sometimes symptoms appear unrealistically fast (though not always; some versions have an incubation period of days), but otherwise it is not so different from actual real-life diseases.
      In those cases, the zombies aren't dead (though they are often beyond hope), so they can be killed by blood loss or destroyed internal organs like any human. However, they will still go berserk like someone on hard drugs, so apparently deadly wounds (like a gunshot to the lung) won't seem to faze them (initially), giving an illusion of invulnerability. If they are to last once infected, they will also have to retain enough survival instinct to scavenge food and water, and even then they would probably die after a few weeks at most.
      From what I've heard, rabies is terrifying.

      On the other end, we have outright magical origins, with curses, possessions or mystical practices. Suspension of disbelief is maintained because, well, magic. The most famous example is possibly Evil Dead.
      This works even when the origin is not stated but can still be inferred to be supernatural.
      From what I have heard, the origin of the modern TV zombie may be a Voodoo practice of drugging someone into a death-like torpor; the victim is then buried, and comes back to their senses one or two dozen hours later as the drug wears off, digging themselves out of their grave.

      Between those, there is the unnatural disease: bioweapons, experiments Gone Horribly Right, even a sentient virus or an alien invasion. In those cases, we are in soft-SF territory, so superscience or alien tech allows the author to ignore how illness or even biology is supposed to work.
      This also works for unknown origins, where zombies simply start appearing. In this case, even if magic or superscience is not visible, the reader/viewer can assume one of those to explain inconsistencies away.

      So it is not so much a bad intuitive grasp of biology as a matter of suspension of disbelief. The dead coming back is an old, old trope. Ghouls, revenants, draugrs and other gjengangers have been around for a long time, so they won't break suspension of disbelief any more than vampires or werewolves (even though those obviously don't make any sense from a biological standpoint either).

    2. I'd take zombies as a very special case where the objective is very well defined (shambling hordes of the undead) but modern rationalism has forced authors to take greater and greater interest in the How and Why of zombies appearing.

      For most of the past century, zombies were a stand-in for large groups of people you don't like. Strong racist connotations aside, it was the idea of an unstoppable group of enemies that authors wanted to put across. How they got there was irrelevant.

      Recently, it seems that the method of creating a zombie is the main concern of the author. Nanotechnology, biotechnology, brainwaves, ... all reveal a certain fear of one aspect or another of the technology we use today or expect to arrive soon. The surviving characters' Holy Grail is the Cure, which is technology in the service of good.

      As for suspension of disbelief, it is a millennia-old component of our culture. A writer always tries to reduce his reliance on it, and more educated audiences have trouble following a story that breaks basic rules they learnt in primary school. Hence the more and more detailed explanations for the origins of zombies. As the audience becomes more and more educated, zombie stories will match their expectations.

    3. https://www.iflscience.com/plants-and-animals/mindcontrolling-zombie-ant-fungus-creepier-thought/

      Here's another explanation for zombies I quite like.

  5. I realise I'm being a bit scatter-brained here, so let me collect my train of thought.

    My take on nanotech is that it's one of those ideas which sits in a sweet spot for mcguffinite:
    - It has a cool name and plausible link to cutting-edge technology.
    - The technology isn't that common or well-developed, so one can ascribe all sorts of fantastical things to it without worrying about contrary experience on the part of the audience.
    - It slots nicely into pre-existing weaknesses that people have in terms of understanding changes of scale, thermodynamics and so on. This means that you can, for instance, very easily depict little robots rapidly doing stuff that would realistically be done by cell analogues operating quite slowly.

    The last part is what interests me, because people have a lot of the same issues in intuitively understanding biology (and for a lot of the same reasons). The result is that all sorts of pretty ridiculous things slide under the radar in terms of suspension of disbelief (eg: zombies). Which, in turn, is part of what makes the concept of suspension of disbelief fascinating - it provides an interesting window into the capabilities and limitations of human cognition.

    Replies
    1. McGuffinite is usually thought of as a tremendously valuable item, resource or effect that can only be obtained in space. It is the key to motivating humans to start a space industry.

      Nanotechnology is actually more easily made on Earth than in space, so it won't work as a McGuffinite. Maybe the word you are looking for is Cornucopia, a machine that single-handedly brings about the singularity by offering everything for free, or Deus Ex Machina, a technology so important and influential that it takes the place of a god.

      I think audiences are becoming more and more educated. 'Simple' knowledge of why a kettle boils over, why you have to wash your hands after going to the toilet or why planes have pointy noses is actually the result of decades of scientific teaching that permeates all of society.

      Nanotechnology will be one of the things that succumbs to real-world knowledge, just like computers and cars. We'll use them every day, and often have to open up the hood to see what's wrong. Naturally, we'll start intuitively knowing what they are capable of, how they go wrong and how to fix them. They'll become a part of our lives, and not so special anymore.

    2. I'm thinking of a McGuffin in the original sense, where 'nanotech' becomes convenient shorthand for 'move the plot along'.

      A more correct word, however, would be 'phlebotinum'. Cornucopia and Deus Ex Machina are good as well, though :)

      I agree with your last points, which is why I think that nanotech, like space travel/warfare/whatever will be depicted more realistically over time.

    3. You've got me thinking: What does the general population understand more over time?

      More specifically, I came up with a bunch of counter-examples to your point. We know how planes work, and most people on the street can tell you what a jet engine does or what flaps are for, but something ancient and ubiquitous, like medicine, is a mystery to most of us!

      We know about evolution, geometry, formatting, the water cycle, the Sun's future growth and so on, but what about programming languages? They're older than exobiology, but people know more about the importance of liquid water on other planets than about the HTML code on the web pages they browse every day.

      I think it's more related to the investment required to understand something. Some concepts can be understood through analogy, or superficially, and still be grasped accurately. Others require preceding knowledge, or are very dissimilar from the rest, and therefore require more investment in time or effort to understand.

      What do you think of this?

    4. Following my first post, I agree with you on this - in that I think people have certain blind spots and limitations to their cognition that show up when looking at these sorts of questions.

      I do, however, think that we have enough cognitive flexibility to adapt our thinking in light of experience.


      Here, it is interesting to compare people's ideas of what constitutes an 'easy' or 'hard' task for AI, and how these have changed over time. Amongst AI researchers, there was this initial idea that pathfinding was a trivial problem (expected to be solved in a summer), while something like chess would be a hard problem for future researchers to tackle. But (bitter) experience taught them that this was not true. More, it (should have) taught them that people are inherently bad at estimating what tasks are difficult for an AI to perform.

      Now compare the researchers' views to those of the general public. Chess is still seen as an intellectually intensive process, while walking and talking are seen as trivial. As such, a robot that is shown walking and talking naturally isn't coded as somehow special or marvellous, while one which is shown playing chess or drawing a picture almost certainly is. And this holds true for nearly all media and cultures in which robots are depicted.

      So people have a cognitive blind spot in terms of assigning degrees of inherent difficulty to mental tasks. This can be partially overcome by prior experience, but crops up in a universal enough fashion to be a sign of a general bias in understanding.

    5. Good points. I think the dissonance you mentioned is rooted in our personification of the humanoid robots: if it's easy for us, surely it must be easy for them.

      Anyways, I enjoyed this discussion and I hope we have many more in the future.

    6. Me too.

      Give a shout if you want to talk about anything biology or law-related, as those are my areas of technical expertise.

    7. Great! You'll have fun then with an upcoming post. Tentative title: Technological Enhancements vs Genetic Engineering. Preliminary answer: neither.

  6. Great article.
    I've thought that dentistry could use the technology. One goes to the dentist. The dentist places a tray on the bottom and top teeth. The tray provides the protected environment, where nanomachines clean the teeth and gums. Cavities are filled with material similar to dentine and enamel. Cracked teeth might even be saved.

    Replies
    1. Ooh, quite the bright idea.

      Let's add another step: after the bulk debris is removed by brushing, and bacteria are killed with ultraviolet, nanomachines go in and repair holes and cracks. THEN, they add very fine layers of protective enamel, allowing artificial restoration of the teeth.

      I'm sure that nanotechnology has literally millions of applications in medicine.

  7. Sorry to necro this thread, but I just found this blog via Atomic Rockets! If you want a relatively well worked example of a nanomachine, there's an excellent concept paper on respirocytes (artificial red blood cells, link below). The paper proposes solutions to many of the issues given, most of which rely on the fact that the human blood stream is already optimized for nanobots. https://www.foresight.org/Nanomedicine/Respirocytes.html

    Replies
    1. No necro on this blog, all discussions are active!

      That was a fascinating read, thanks. I'm personally of the opinion that genetically engineered retroviruses are technically already medical nanotechnology, and that developing the field of bio-inspired nanomachines will create useful results earlier than assuming atomically perfect structures.

    2. Oh certainly! Biotech, especially molecular/bacterial biotech is already changing medicine, chemical engineering, etc.

    3. Quite true. If Elon Musk's Neuralink continues development, we'll have nanotechnology changing our entire relationship with machines.

      Smartphones have changed the way our mind works. (https://www.sciencenews.org/article/smartphones-may-be-changing-way-we-think). Imagine how much change a direct connection between our brains and electronics will bring!

    4. Given the state of the art in cybersecurity, the first change will be hackers and government/criminal entities having direct access to your brain.

  8. What companies are still penny stocks with Great potential to soar this year ? New in the market :-)

  9. Nanotechnology is a truly interesting topic. You can study it for many hours and still not get it right. We produce nanomaterials and have put a lot of study hours into this, yet I find this article good and educational.

  10. What do you think about graphene? I think it is an interesting nanomaterial that could be used in many ways. I hope this nanomaterial will get used more and more.

    Replies
    1. Hi Carl. If we solve the current problems of production rate, cost and quality for graphene, it will see more use.
