Micromouse, a Robotic Maze-Solving Competition

Micromouse is an event where small robotic “mice” race to get to the end of a maze as quickly as possible, after having explored it beforehand. The winner this year finished in less than 4 seconds; you’ll have to watch it to appreciate just how incredible that is: 

Below is a video of the exploratory phase, which to me is even more remarkable. The robot figures out its route pretty quickly, in less than two minutes. If it had good cameras on its sides so it could check dead-ends without running up to them, it would probably blaze right through. 

I imagine the technology and techniques used here are relevant for robots that will navigate other environments – the Roomba comes to mind, but I’m sure there are more important industrial applications.
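Many maze-solving robots plan with a simple “flood fill” search: label every open cell with its distance to the goal, then always step downhill. I don’t know what this year’s winner actually ran, but a minimal sketch of the idea (with a made-up grid and wall layout) looks like this:

```python
from collections import deque

def flood_fill(walls, start, goal):
    """Label each open cell with its distance to the goal (BFS), then
    walk 'downhill' from the start to recover a shortest route."""
    rows, cols = len(walls), len(walls[0])
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in steps:
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and walls[nxt[0]][nxt[1]] == 0 and nxt not in dist):
                dist[nxt] = dist[(r, c)] + 1
                frontier.append(nxt)
    if start not in dist:
        raise ValueError("no route to the goal")
    path = [start]
    while path[-1] != goal:
        r, c = path[-1]
        # Every reachable cell has a neighbour one step closer to the goal.
        path.append(min((n for n in ((r + dr, c + dc) for dr, dc in steps)
                         if n in dist), key=dist.get))
    return path

# 0 = open cell, 1 = wall (a hypothetical 3x3 maze for illustration)
maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(flood_fill(maze, (0, 0), (2, 0)))
```

Real micromouse firmware re-runs the fill as new walls are discovered during the exploratory phase, but the core idea is the same.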


The March of Contact Lens Computers

Earlier on I briefly touched on what I imagined future personal computers might, in an awesome world, be like: contact lenses with light displays and tiny cameras for tracking your interaction with the displayed augmented reality. A new study has fed my fantasy (via BBC):

A new generation of contact lenses that project images in front of the eyes is a step closer after successful animal trials, say scientists.

The technology could allow wearers to read floating texts and emails or augment their sight with computer-generated images, Terminator-style…

Currently, their crude prototype device can only work if it is within centimetres of the wireless battery.

And its microcircuitry is only enough for one light-emitting diode, reports the Journal of Micromechanics and Microengineering.

But now that initial safety tests in rabbits have gone well, with no obvious adverse effects, the researchers have renewed faith about the device’s possibilities.

They envisage hundreds more pixels could be embedded in the flexible lens to produce complex holographic images.

For example, drivers could wear them to see journey directions or their vehicle’s speed projected onto the windscreen.

Similarly, the lenses could take the virtual world of video gaming to a new level.

They could also provide up-to-date medical information like blood sugar levels by linking to biosensors in the wearer’s body.

Man, those are some tiny paragraphs. This is obviously a very, very early stage in this technology, and there could be any number of issues that prevent it from being feasible. If it is feasible though, I think it’s hard to overstate how revolutionary it would be, and I think this BBC article does indeed understate it. Who cares about taking the world of video gaming to a new level, when it could take the world of living to a new level?

There are already smartphone apps with augmented reality that, when you point the camera at a restaurant, for example, will overlay on the image reviews for that restaurant. There’s also an app that can translate signs in real time on your phone’s display:

I don’t even have a smartphone, so I assume there are plenty of other examples of augmented reality apps. Imagine if everyone saw the world through services like this, all the time. I can’t imagine it would be long before we could combine social networking and face recognition to do the same thing with people. Look at a friend, and their last ten status updates pop up, or online articles they’ve read lately, so you know what to talk about. Look at a new acquaintance and you can get their relevant information displayed immediately. Even without the process being interactive, this would dramatically change how we interact with the world in a way that I can’t fully imagine now. 

Okay, that’s enough sci-fi gushing for me today. If you have other ideas on the future of personal computers, I’d love to hear about them. 

Exoskeletons as Fashion

There’s an interesting article over at the new Discover Magazine blog, the Crux, by Kyle Munkittrick, which you should definitely go read if you’re interested, since I’ll just touch on it here. He discusses powered exoskeletons as a coming fashion trend – something I had not at all envisioned. He points out that every fashion is a prosthesis, and, most relevantly, that glasses and contacts were first seen only as aids for a disability but have since become very much a part of fashion. Why shouldn’t we adapt to exoskeletons as fashion as well?

Right now powered exoskeletons seem to be considered mainly as aids for the handicapped or for the military, but as they get cheaper and better it’s not hard to imagine them being used in everyday life – there’s plenty of work that requires extra safety or heavy lifting. Hard hats haven’t exactly become sexy, but exoskeletons would be used in a wider variety of circumstances by a wider variety of demographic groups. And since a fair number of companies are already competing over exoskeletons, it would make sense for each to market its products as fashionably as possible for each niche. 

If you’re wondering about the current state of exoskeletons, below are some examples of modern powered suits in action. Many companies are developing them, but the technology doesn’t seem to be quite developed enough for them to be widespread. Regardless, the exoskeleton seen in the first video (HAL) is being used in over 100 hospitals, according to the Tokyo Times, and the exoskeleton from the second video is being sold for personal use in New Zealand, although it’s currently rather pricey at about $150,000 USD.

Fun fact: the first ever powered exoskeleton was developed by GE and the U.S. military in the 1960s; it was strong, but too heavy and too difficult to control, so it was never even tested with a person inside. 

And on a side note, the Japanese company that’s working on HAL is called Cyberdyne. They named themselves after the company that created Skynet in the Terminator series, and their exoskeleton shares its name with the homicidal AI from 2001: A Space Odyssey. If they’re trying to tell us something about their future plans, they couldn’t put it any clearer. 

Robot Can Control a Human Arm

Using electrodes on a human test subject’s arm, a robot could manipulate the human arm as well as its own arms to coordinate an action between them. This is relevant to the pursuit of robots that can assist paralyzed individuals, by using the robot body in addition to helping the paralyzed person move their own limbs. Below is a video showing this robot in action:


From Automaton:

The robot controls the human limb by sending small electrical currents to electrodes taped to the person’s forearm and biceps, which allows it to command the elbow and hand to move. In the experiment, the person holds a ball, and the robot a hoop; the robot, a small humanoid, has to coordinate the movement of both arms to successfully drop the ball through the hoop…

“Imagine a robot that brings a glass of water to a person with limited movements,” says Bruno Vilhena Adorno, the study’s lead researcher. “From a medical point of view, you might want to encourage the person to move more, and that’s when the robot can help, by moving the person’s arm to reach and hold the glass.”

Another advantage, he adds, is that capable robotic arms are still big, heavy, and expensive. By relying on a person’s physical abilities, robotic arms designed to assist people can have their complexity and cost reduced. Many research teams are teaching robots how to perform bimanual manipulations, and Adorno says it seemed like a natural step to bring human arms into the mix…

The researchers emphasize that the control of the human arm doesn’t have to be precise, just “good enough” to place it inside the robot’s workspace. They claim that having a robot able to control a person’s arm is better than having a very dexterous robot and a person’s weak, unsteady limb…

He plans to continue the project and adds that they’re now improving the electrical stimulation. They’re now able to move the elbow in both directions, for example. Eventually they hope to move the arm to any point in space.

The basic idea, then, is that it’s difficult to provide assistance to people if they can’t effectively use their own limbs, so why not have their helper robot move their limbs for them? 

I know you’re thinking what I’m thinking: terrifying. Besides that, it should be noted that neurons that don’t get any stimulation for a while can end up dying off, so some paralyzed individuals may not have the option of just getting outside stimulation for their nerves, since the nerves won’t be intact any more. I imagine this solution, activating neurons from the outside, might head that degeneration off if it’s used not too long after the paralyzing event. 

The Telesar V Robot Avatar

Wired has an article about a robot that can be communicated with like an avatar – it mimics a user’s movements and transmits visual, auditory and even sensory information back to the user. Here’s a video of this robot in action:

From Wired:

The Telesar V can deliver a remote experience straight to its operator, transmitting sight, sound and touch data using a series of sensors and a 3D head-mounted display. The robot’s operator wears a 3D display helmet, which relays the robot’s entire field of view. A set of headphones transmit what the robot can hear…

With the Telesar V robot, for instance, you can actually feel the shape and temperature of objects, as well as surface unevenness like that of the bumps on the tops of LEGO blocks…

Some nifty telepresence robots — similar to telexistence, but less immersive — are already available in the U.S. The Anybots’ QB Robot has a webcam in its “head,” relaying visual information to its operator while displaying an image of the person at the helm on a small display underneath the camera. Almost Segway-esque in appearance, the QB is a two-wheeled apparatus controlled remotely via desktop. Though at $15,000 a pop, it’s designed more for corporations who need to check in on remote offices than the average consumer.

As far as movement goes, the Telesar V has 17 degrees of freedom in the body, 8 in the head and 7 in the arm joints (the same as a human arm). The hands have 15 degrees of freedom, considerably less than the roughly 30 degrees of freedom a normal human hand has (and some other robotic hands emulate), but enough to allow the robot to easily manipulate objects.

How useful will these robots be in the future? It’s hard to say. I’m sure there are people who work in dangerous conditions who’d much rather be in a control room – we already see this with Predator and Reaper drones replacing piloted aircraft. Maybe once the technology is affordable, we’ll see robots replacing humans for things like bomb disposal and hazardous chemical jobs as well?

Another possibility mentioned in the article is space exploration, although I question how much a humanoid robot could get done in space. Maybe at some point it’ll be possible to have remote labs on Mars or the moon operated by robot avatars? I think that level of sophistication would probably be overkill, but who knows? Trying to accurately predict the future, especially the march of technology, is a great way to feel really dumb.

An Explosive Material Based on Nanoparticles and DNA

Researchers have created an explosive composite material using nanoparticles and DNA. Aluminum and copper oxide are known to react together and release energy; using nanoparticles of each increases the reactive surface area, and linking them with DNA makes them self-assemble. DNA exists in organisms as two complementary strands tightly stuck together; these researchers took advantage of this by grafting single DNA strands onto the nanoparticles, mixing them up, and letting the complementary strands stick together.

From PhysOrg:

As a result, the complementary strands on each type of nanoparticle bind, turning the original aluminium and copper oxide powder into a compact, solid material which spontaneously ignites when heated to 410 °C (one of the lowest spontaneous ignition temperatures hitherto described in the literature).

If I’m not mistaken, spontaneous ignition here just means that it begins burning (combustion) without being “lit” by a flame or a spark. 

In addition to its low ignition temperature, this composite also offers the advantage of having a high energy density, similar to nitroglycerine: for the same quantity of material, it produces considerably more heat than aluminium and copper oxide taken separately, where a significant part of the energy is not released. In contrast, by using nanoparticles, with their large active surfaces, the researchers were able to approach the maximum theoretical energy for this exothermic chemical reaction.

The high energy density of this composite makes it an ideal fuel for nanosatellites, which weigh a handful of kilograms and are increasingly used. Such satellites are too light to be equipped with a conventional propulsion system once in orbit. However, a few hundred grams of this composite would give them sufficient energy to adjust their trajectory and orientation.

The composite could also have a host of terrestrial applications: ignitors for gas in internal combustion engines or for fuel in aircraft and rocket nozzles, miniature detonators, on-site welding tools, etc. Once its heat is turned into electrical energy, the composite could also be used as a back-up source for microsystems (such as pollution detectors scattered through the environment).

This article really caught my attention because of their use of DNA as a sort of glue. The article explains why DNA works, and using DNA to bind nanoparticles is apparently not a new idea, but unfortunately it doesn’t explain why DNA is the best choice in this particular case. Double-stranded DNA normally comes apart at temperatures way below 410 °C, which seems like it would be relevant here. 

In trying to find another article to explain the DNA thing, I found the one I just linked above (and here) about using DNA to form crystal lattices out of nanoparticles; you should check it out. Maybe this line of research will open up nanoparticles for wider use and greater self-assembly, which would probably be pretty revolutionary for all of us. 
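The study doesn’t give the actual linker sequences, so as a toy illustration (with a hypothetical sequence), here’s how complementarity works, along with the crude “Wallace rule” estimate of a short oligo’s melting temperature – the point being that any plausible linker comes apart far below 410 °C:

```python
# Watson-Crick pairing: A binds T, G binds C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """The sequence that hybridizes to `strand`."""
    return "".join(PAIR[base] for base in reversed(strand))

def wallace_tm(strand):
    """Wallace rule: rough melting temperature in Celsius for short
    oligos -- 2 degrees per A/T pair, 4 degrees per G/C pair."""
    return (2 * (strand.count("A") + strand.count("T"))
            + 4 * (strand.count("G") + strand.count("C")))

linker = "ATGCATGCATGC"  # hypothetical linker sequence
print(reverse_complement(linker))  # the strand grafted to the other particle
print(wallace_tm(linker), "degrees C")
```

Even for longer linkers the estimate stays well under 100 °C, so whatever holds the composite together up at 410 °C, it presumably isn’t intact double-stranded DNA.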

Nanoparticle Exposure May Not Be An Issue After All

The study of nanoparticles is a growing field relevant to nanotechnology. Nanoparticles – tiny particles of a material, anywhere between 1 and 2500 nanometres in diameter – are particularly interesting because they can have different properties than the same material in larger quantities. Their properties are size-dependent because the proportion of atoms on a particle’s surface is non-negligible compared to the number in its interior, unlike with larger objects. 

For example, from Wikipedia

Nanoparticles often possess unexpected optical properties as they are small enough to confine their electrons and produce quantum effects. For example gold nanoparticles appear deep red to black in solution. Nanoparticles of usually yellow gold and gray silicon are red in color. Gold nanoparticles melt at much lower temperatures (~300 °C for 2.5 nm size) than the gold slabs (1064 °C). And absorption of solar radiation in photovoltaic cells is much higher in materials composed of nanoparticles than it is in thin films of continuous sheets of material.

However, the unusual properties of nanoparticles mean, naturally, that they could be harmful to human health in some way. This has been a worry for some time now as nanotechnology has risen to prominence. In what seems like a big relief, a new study suggests that we may actually be exposed to nanoparticles all the time, so if they had any dangerous effects, we should already know about them.

From ScienceDaily:

Since the emergence of nanotechnology, researchers, regulators and the public have been concerned that the potential toxicity of nano-sized products might threaten human health by way of environmental exposure.

Now, with the help of high-powered transmission electron microscopes, chemists captured never-before-seen views of miniscule metal nanoparticles naturally being created by silver articles such as wire, jewelry and eating utensils in contact with other surfaces. It turns out, researchers say, nanoparticles have been in contact with humans for a long, long time…

Using a new approach developed at [the University of Oregon] that allows for the direct observation of microscopic changes in nanoparticles over time, researchers found that silver nanoparticles deposited on the surface of their SMART Grids electron microscope slides began to transform in size, shape and particle populations within a few hours, especially when exposed to humid air, water and light. Similar dynamic behavior and new nanoparticle formation was observed when the study was extended to look at macro-sized silver objects such as wire or jewelry.

“Our findings show that nanoparticle ‘size’ may not be static, especially when particles are on surfaces. For this reason, we believe that environmental health and safety concerns should not be defined — or regulated — based upon size,” said James E. Hutchison, who holds the Lokey-Harrington Chair in Chemistry. “In addition, the generation of nanoparticles from objects that humans have contacted for millennia suggests that humans have been exposed to these nanoparticles throughout time. Rather than raise concern, I think this suggests that we would have already linked exposure to these materials to health hazards if there were any.”

Any potential federal regulatory policies, the research team concluded, should allow for the presence of background levels of nanoparticles and their dynamic behavior in the environment.

So that’s good news. Nanotechnologists, nanotechnology away!
