Thursday, June 14, 2012

Interconnected robot swarm takes to the skies

The École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland is developing swarms of flying robots that could be deployed in disaster areas to create communication networks for rescuers. The Swarming Micro Air Vehicle Network (SMAVNET) project comprises robust, lightweight robots and software that lets the devices communicate with each other wirelessly.
The flying robots were built out of expanded polypropylene with a single motor at the rear and two elevons (control surfaces that enable steering). The robots are equipped with an autopilot that controls altitude, airspeed and turn rate. A micro-controller operates using three sensors: a gyroscope and two pressure sensors. The robots also have a GPS module to log their flight paths.
The swarm controllers, which run Linux, connect to an off-the-shelf USB Wi-Fi dongle. Their output (the desired turn rate, speed or altitude) is sent to the autopilot.
For the swarming, robots react to wireless communication with neighbouring robots or rescuers, rather than relying on GPS or other positioning sensors that can be unreliable, impractical or expensive. Software that knows where nearby robots are keeps them from crashing into each other.
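
The general idea is easy to sketch in code. The fragment below is an illustration only, not the SMAVNET software; the bearing and signal-strength inputs and the thresholds are invented for the example:

# Communication-based collision avoidance (illustrative sketch): each robot
# turns away from its loudest neighbour, using only received signal strength,
# so no GPS or other positioning sensor is needed.
def steer(neighbours, cruise_turn_rate=0.0, rssi_threshold=-60.0):
    """neighbours: list of (bearing_deg, rssi_dbm) pairs from Wi-Fi beacons."""
    if not neighbours:
        return cruise_turn_rate              # nobody nearby: keep cruising
    bearing, rssi = max(neighbours, key=lambda n: n[1])
    if rssi < rssi_threshold:
        return cruise_turn_rate              # nearest robot is still far away
    # Turn away from the strongest (i.e. nearest) transmitter; the result is
    # the desired turn rate handed to the autopilot.
    return -1.0 if bearing >= 0 else 1.0

print(steer([(30.0, -45.0)]))  # strong neighbour to the right -> -1.0 (turn left)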
 



Designing swarm controllers is generally quite challenging because there is no clear relationship between the individual robot behaviour and the resultant behaviour of the whole swarm. The researchers therefore looked to biology for the answer.

Network analyzer examines S-parameters without cost of vector-based instruments

LeCroy Corporation, a supplier of oscilloscopes, protocol analyzers and serial data test solutions, today announced the launch of a new class of instrument, the SPARQ series of Signal Integrity Network Analyzers. The SPARQ measures 40 GHz S-parameters on up to 4-ports with single button press operation at a small fraction of the cost of traditional methods such as Vector Network Analyzers. With the low price and ease of use of the SPARQ, multi-port S-parameter measurements are now accessible to a much wider audience.

The SPARQ is a time domain instrument, using TDR/TDT (time domain reflectometry and transmission) technology along with patented LeCroy innovations to rapidly acquire waveforms and measure the S-parameters of a device under test. The SPARQ measures both frequency- and time-domain results, and outputs standard Touchstone S-parameter files that are ready to be loaded into the user’s simulation software. The unit is small, rugged, PC-based and portable, and includes all of the hardware and software tools required by the signal integrity engineer for characterizing passive devices.
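
Because Touchstone is a plain-text industry standard, the SPARQ's output can be consumed by common analysis tools. As a hedged illustration (the file name here is hypothetical), the open-source Python package scikit-rf reads such a file directly:

# Loading a 4-port Touchstone file of the kind the SPARQ produces,
# using the open-source scikit-rf package.
import skrf

dut = skrf.Network('device_under_test.s4p')  # .s4p = 4-port S-parameters
print(dut.f[:3])         # first few frequency points, in Hz
print(dut.s.shape)       # S-parameter array: (n_frequencies, 4, 4)
dut.plot_s_db(m=1, n=0)  # plot S21 (insertion loss) in dB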

The SPARQ calibrates using an OSLT calibration kit that is internal to the unit. This allows the calibration and measurement to proceed automatically with a single button click, without any need to connect and disconnect calibration standards. With the SPARQ, the painstaking, lengthy and error-prone calibration procedure is eliminated once and for all. The “E” model SPARQ units include the internal calibration capability as standard, and also support manual calibration using an external calibration kit. Setting up the SPARQ is also fast and easy; all configurations for the S-parameter measurements are contained within a single setup screen.

China takes top spot with 2.5-petaflop supercomputer

The fully operational Tianhe-1A, located at the National Supercomputer Center in Tianjin, has been scored at 2.507 petaflops as measured by the LINPACK benchmark. The result was announced Thursday at HPC 2010 China.

That score moves it past Cray's 2.3-petaflop Jaguar located at Oak Ridge National Lab in Tennessee. The newest Tianhe, which is Chinese for “River in the Sky” or “Milky Way”, achieved the record using 7,168 NVIDIA Tesla M2050 GPUs and 14,336 Intel Xeon CPUs consuming 4.04 megawatts. It operates with 262 TB of main memory and 2 PB of storage and boasts a proprietary interconnect technology developed by China’s National University of Defense Technology. Tianhe-1A epitomizes modern heterogeneous computing by coupling massively parallel GPUs with multi-core CPUs.

According to graphical processing unit manufacturer NVIDIA, which contributed heavily to the project, it would require more than 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone. More importantly, a 2.507 petaflop system built entirely with CPUs would consume more than 12 MW. The use of GPUs in a heterogeneous computing environment allows Tianhe-1A to consume only 4.04 MW, making it three times more power efficient. The difference in power consumption is enough to provide electricity to over 5,000 homes for a year.
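
The arithmetic behind that claim checks out, as the short calculation below shows (using only the figures quoted above):

# Checking the efficiency figures quoted in the article.
petaflops = 2.507
heterogeneous_mw = 4.04   # Tianhe-1A with GPUs and CPUs
cpu_only_mw = 12.0        # NVIDIA's estimate for a CPU-only equivalent

print(petaflops / heterogeneous_mw)    # ~0.62 petaflops per megawatt
print(petaflops / cpu_only_mw)         # ~0.21 petaflops per megawatt
print(cpu_only_mw / heterogeneous_mw)  # ~2.97, i.e. roughly three times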

"The performance and efficiency of Tianhe-1A was simply not possible without GPUs," said Guangming Liu, chief of National Supercomputer Center in Tianjin.

Fusion-io achieves 1 million IOPS from single PCI Express card

At Supercomputing 2010 this week, Fusion-io announced that it has achieved the highest input/output operations per second (IOPS) and bandwidth in the industry. This performance is higher than that of any other solid-state or traditional disk-based technology on the market.

Fusion’s ioMemory technology has enabled more than 1 million IOPS from an increasingly dense footprint:
  • In 2008, from a single rack, working together with IBM on Project Quicksilver
  • In 2009, from a single server, working together with HP’s Proliant team
  • Now, in 2010, from a single PCI Express card, the ioDrive Octal again redefines the standard against which all others will be compared


In addition to providing more than 1 million IOPS of performance, each ioDrive Octal provides 6.2 GB/s of bandwidth and up to 5.7 TB of linear-scaling capacity per PCI-Express slot. This allows applications to process tens of terabytes of data without the latency impact of accessing backing data stores.

“Scientists face an overabundance of data in areas such as climatology, cosmology, nanotechnology and defense. Accessing and visualizing these complex data models take an inordinate amount of time,” said David Flynn, CEO of Fusion-io. “Rapid data access enables researchers to quickly and reliably solve problems, and technologies such as those from Fusion-io allow them to analyze much more data faster than ever before.”

Demonstrated last year at SC09, the ioDrive Octal extends Fusion’s ioMemory portfolio and offers customers the highest performance available on the market today. The ioDrive Octal holds eight ioMemory Modules - putting the equivalent capacity, performance and reliability of eight ioDrives into a single card. It fits any PCI Express x16 Gen2 double-wide slot, the same as those used for high-performance graphics cards.

Kinect Controller spies a new wave of hacking

When Oliver Kreylos, a computer scientist, heard about the capabilities of Microsoft’s new Kinect gaming device, he couldn’t wait to get his hands on it. “I dropped everything, rode my bike to the closest game store and bought one,” he said.

But he had no interest in playing video games with the Kinect, which is meant to be plugged into an Xbox and allows players to control the action onscreen by moving their bodies.

Mr. Kreylos, who specializes in virtual reality and 3-D graphics, had just learned that he could download some software and use the device with his computer instead. He was soon using it to create “holographic” video images that can be rotated on a computer screen. A video he posted on YouTube last week caused jaws to drop and has been watched 1.3 million times.

Mr. Kreylos is part of a crowd of programmers, roboticists and tinkerers who are getting the Kinect to do things it was not really meant to do. The attraction of the device is that it is outfitted with cameras, sensors and software that let it detect movement, depth, and the shape and position of the human body.

New algorithm breaks bottlenecks

As sensors that do things like detect touch and motion in cell phones get smaller, cheaper, and more reliable, computer manufacturers are beginning to take seriously the decade-old idea of “smart dust”—networks of tiny wireless devices that permeate the environment, monitoring everything from the structural integrity of buildings and bridges to the activity of live volcanoes. In order for such networks to make collective decisions, however, they need to integrate information gathered by hundreds or thousands of devices.
But networks of cheap sensors scattered in punishing and protean environments are prone to “bottlenecks,” regions of sparse connectivity that all transmitted data must pass through in order to reach the whole network. Keren Censor-Hillel, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory, and Hadas Shachnai of Technion—Israel Institute of Technology presented a new algorithm that handles bottlenecks much more effectively than its predecessors.
The algorithm is designed to work in so-called ad hoc networks, in which no one device acts as superintendent, overseeing the network as a whole. In a network of cheap wireless sensors, for instance, any given device could fail: its battery could die; its signal could be obstructed; it could even be carried off by a foraging animal. The network has to be able to adjust to any device’s disappearance, which means that no one device can have too much responsibility.
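
The flavour of such leaderless aggregation can be seen in a classic gossip-style averaging scheme. The sketch below is a generic textbook illustration, not the Censor-Hillel and Shachnai algorithm: each round, a random pair of connected devices averages its readings, and every value converges toward the global mean with no supervising node.

import random

def gossip(values, links, rounds=2000):
    """values: one sensor reading per device; links: connected device pairs."""
    for _ in range(rounds):
        a, b = random.choice(links)
        values[a] = values[b] = (values[a] + values[b]) / 2.0
    return values

# A four-device network with a bottleneck: devices 0-1 and 2-3 form two
# clusters joined only by the single link (1, 2).
readings = [10.0, 20.0, 30.0, 40.0]
links = [(0, 1), (2, 3), (1, 2)]
print(gossip(readings, links))  # every entry approaches the mean, 25.0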

Team achieves world record data transmission speed

Scientists at the Karlsruhe Institute of Technology (KIT) have succeeded in encoding data at a rate of 26 terabits per second on a single laser beam, transmitting it over a distance of 50 km, and decoding it successfully. This is the largest data volume ever transported on a laser beam. The process developed by KIT makes it possible to transmit the contents of 700 DVDs in just one second. The journal Nature Photonics reports on this success in its latest issue.
   
With this experiment, the KIT scientists in the team of Professor Jürg Leuthold beat their own 2010 record in high-speed data transmission, when they exceeded the magic limit of 10 terabits per second, i.e. a data rate of 10,000 billion bits per second. The group owes this success to a new data decoding process. The opto-electric decoding method is based on an initially purely optical calculation at the highest data rates, which breaks the high data rate down into smaller bit rates that can then be processed electrically. This initial optical reduction of the bit rate is necessary because no electronic processing methods are available for a data rate of 26 terabits per second.
   
Leuthold's team applies so-called orthogonal frequency division multiplexing (OFDM) for the record data encoding. This process has been used successfully in mobile communications for many years. It is based on mathematical routines (the Fast Fourier Transform).
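
In OFDM, an inverse FFT spreads the data across many orthogonal subcarriers and an FFT at the receiver separates them again. The numpy sketch below shows the textbook round trip; KIT's contribution was performing the receiver-side calculation optically rather than electronically:

import numpy as np

n_subcarriers = 8
rng = np.random.default_rng(0)

# One QPSK symbol (two bits) per subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

time_signal = np.fft.ifft(symbols)  # transmitter: subcarriers -> waveform
received = np.fft.fft(time_signal)  # receiver: waveform -> subcarriers

print(np.allclose(received, symbols))  # True: all symbols recovered exactly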

"The challenge was to increase the process speed not only by a factor of 1000, but by a factor of nearly a million for data processing at 26 terabits per second," explains Leuthold who is heading the Institutes of Photonics and Quantum Electronics and Microstructure Technology at KIT. "The decisive innovative idea was optical implementation of the mathematical routine." Calculation in the optical range turned out to be not only extremely fast, but also highly energy-efficient, because energy is required for the laser and a few process steps only.
"Our result shows that physical limits are not yet exceeded even at extremely high data rates", Leuthold says while having in mind the constantly growing data volume on the internet. In the opinion of Leuthold, transmission of 26 terabits per second confirms that even high data rates can be handled today, while energy consumption is minimized.
   

U.S. says no new cybersecurity treaty needed

LONDON (AP) — America's new cyber czar said Wednesday that international law and cooperation — not another treaty — were enough to tackle cybersecurity issues for now.

Christopher Painter, coordinator for cyber issues at the U.S. State Department, declined to comment on a Wall Street Journal report Tuesday that said the Pentagon was considering a policy that could classify some cyberattacks as acts of war. He said the report was based on material that had not yet been released or discussed.

He did, however, say that U.S. President Barack Obama's recent cybersecurity strategy covered a myriad of different aspects, ranging from international freedoms to governance issues and challenges facing the military.

"We don't need a new treaty," he told The Associated Press as he arrived for an international cybersecurity summit in London. "We need a discussion around the norms that are in cyberspace, what the rules of the road are and we need to build a consensus around those topics."

New cyber attacks are being perfected so quickly that the world needs a nonproliferation treaty to control their creation and use, the chairman of one of the world's largest telecommunications companies said Wednesday.

Michael Rake of BT Group PLC warned that world powers are being drawn into a high-tech arms race, with many already able to fight a war without firing a single shot.

"I don't think personally it's an exaggeration to say now that basically you can bring a state to its knees without any military action whatsoever," Rake said. He said it was "critical to try to move toward some sort of cyber technology nonproliferation treaty."

The suggestion drew a mixed response from cyberwarriors gathered in London for a conference on Internet security, although at least one academic praised it for highlighting the need to subject online interstate attacks to some kind of an international legal framework.

Cyberweapons and cyberwarfare have increasingly preoccupied policymakers as hacks and computer viruses grow in complexity.

Recent high-profile attacks against Sony Corp. and Lockheed Martin Corp. have made headlines, while experts described last year's discovery of the super-sophisticated Stuxnet virus — thought to have been aimed at sabotaging Iran's disputed nuclear program — as an illustration of the havoc that malicious programs can wreak on infrastructure and industry.

"You can close vital systems, energy systems, medical systems," Rake said. "The ability to have significant impact on a state is there."

The threat grows every day. Natalya Kaspersky, co-founder of anti-virus software provider Kaspersky Lab ZAO, said Internet security firms were logging some 70,000 new malicious programs every 24 hours. Shawn Henry, executive assistant director of the FBI, said that last year alone his agency arrested more than 200 cybercriminals.

Tree identification packaged in an app

WASHINGTON (AP) — If you've ever wondered what type of tree was nearby but didn't have a guide book, a new smartphone app allows users with no formal training to satisfy their curiosity and contribute to science at the same time.

Scientists have developed the first mobile app to identify plants by simply photographing a leaf. The free iPhone and iPad app, called Leafsnap, instantly searches a growing library of leaf images amassed by the Smithsonian Institution. In seconds, it returns a likely species name, high-resolution photographs and information on the tree's flowers, fruit, seeds and bark.

Users make the final identification and share their findings with the app's growing database to help map the population of trees one mobile phone at a time.

Leafsnap debuted in May, covering all the trees in New York's Central Park and Washington's Rock Creek Park. It was downloaded more than 150,000 times in its first month, and its creators expect use to keep growing as the app expands to Android phones.

By this summer, it will include all the trees of the Northeast and eventually will cover all the trees of North America.

Smithsonian research botanist John Kress, who created the app with engineers from Columbia University and the University of Maryland, said it was originally conceived in 2003 as a high-tech aid for scientists to discover new species in unknown habitats. The project evolved, though, with the emergence of smartphones to become a new way for citizens to contribute to research.

"This is going to be able to populate a database of every tree in the United States," Kress said. "I mean that's millions and millions and millions of trees, so that would be really neat."

It's also the first real chance for citizens to directly access some of the science based on the nearly 5 million specimens kept by the U.S. National Herbarium at the Smithsonian's National Museum of Natural History. The collection began in 1848 and is among the world's 10 largest plant collections.

Arctic chill brings Facebook data center to Sweden

STOCKHOLM (AP)—Facebook will build a new server farm on the edge of the Arctic Circle—its first outside the U.S.—that will improve performance for European users of the social networking site, officials said Thursday.

After reviewing potential locations across Europe, Facebook confirmed it had picked the northern Swedish city of Lulea for the data center partly because of the cold climate—crucial for keeping the servers cool—and access to renewable energy from nearby hydropower facilities.

The move reflects the growing international presence of the California-based site, which counts 800 million users worldwide.

"Facebook has more users outside the U.S. than inside," Facebook director of site operations Tom Furlong told The Associated Press. "It was time for us to expand in Europe."

He said European users would get better performance from having a node for data traffic closer to them. Facebook currently stores data at sites in California, Virginia and Oregon and is building another facility in North Carolina.

The Lulea data center, which will consist of three 300,000-sq-ft (28,000-m²) server buildings, is scheduled for completion by 2014. The site will draw 120 MW of power, fully derived from hydropower.

Located 60 miles (100 km) south of the Arctic Circle, Lulea lies near hydropower stations on a river that generates twice as much electricity as the Hoover Dam on the border of Nevada and Arizona, Facebook said.

In case of a blackout, construction designs call for each building to have 14 backup diesel generators with a total output of 40 MW.

Nanophotonic LED reshapes on-chip data transmission

A team at Stanford's School of Engineering has demonstrated an ultrafast nanoscale light-emitting diode (LED) that is orders of magnitude lower in power consumption than today's laser-based systems and able to transmit data at 10 billion bits per second. The researchers say it is a major step toward practical ultrafast, low-power light sources for on-chip computer data transmission.
   
Stanford's Jelena Vuckovic, an associate professor of electrical engineering and the study's senior author, and first author Gary Shambat, a doctoral candidate in electrical engineering, announced their device in a paper to be published November 15 in the journal Nature Communications.
   
Vuckovic had earlier this year produced a nanoscale laser that was similarly efficient and fast, but that device operated only at temperatures below 150 kelvin, about -190 °F, making it impractical for commercial use. The new device operates at room temperature and could therefore represent an important step toward next-generation computer processors.
   
"Low-power, electrically controlled light sources are vital for next generation optical systems to meet the growing energy demands of the computer industry," said Vuckovic. "This moves us in that direction significantly."

High-energy physicists set record for network data transfer

Researchers have set a new world record for data transfer, helping to usher in the next generation of high-speed network technology. At the SuperComputing 2011 (SC11) conference in Seattle during mid-November, the international team transferred data in opposite directions at a combined rate of 186 Gbps in a wide-area network circuit. The rate is equivalent to moving two million gigabytes per day, fast enough to transfer nearly 100,000 full Blu-ray disks—each with a complete movie and all the extras—in a day.
The team of high-energy physicists, computer scientists, and network engineers was led by the California Institute of Technology (Caltech) and included the University of Victoria, the University of Michigan, the European Center for Nuclear Research (CERN), Florida International University, and other partners.
According to the researchers, the achievement will help establish new ways to transport the increasingly large quantities of data that traverse continents and oceans via global networks of optical fibers. These new methods are needed for the next generation of network technology—which allows transfer rates of 40 and 100 Gbps—that will be built in the next couple of years.
"Our group and its partners are showing how massive amounts of data will be handled and transported in the future," says Harvey Newman, professor of physics and head of the high-energy physics (HEP) team. "Having these tools in our hands allows us to engage in realizable visions others do not have. We can see a clear path to a future others cannot yet imagine with any confidence."

Chips as mini Internets

Computer chips have stopped getting faster. In order to keep increasing chips' computational power at the rate to which we've grown accustomed, chipmakers are instead giving them additional "cores," or processing units.
Today, a typical chip might have six or eight cores, all communicating with each other over a single bundle of wires, called a bus. With a bus, however, only one pair of cores can talk at a time, which would be a serious limitation in chips with hundreds or even thousands of cores, which many electrical engineers envision as the future of computing.
Li-Shiuan Peh, an associate professor of electrical engineering and computer science at Massachusetts Institute of Technology (MIT), wants cores to communicate the same way computers hooked to the Internet do: by bundling the information they transmit into "packets." Each core would have its own router, which could send a packet down any of several paths, depending on the condition of the network as a whole.
At the Design Automation Conference, Peh and her colleagues will present a paper she describes as "summarizing 10 years of research" on such "networks on chip." Not only do the researchers establish theoretical limits on the efficiency of packet-switched on-chip communication networks, but they also present measurements performed on a test chip in which they came very close to reaching several of those limits.
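
A toy sketch conveys the packet-switched idea (an illustration only, not the MIT test chip's router): each core's router forwards a packet along whichever candidate link toward the destination currently has the shortest queue, so traffic routes itself around congestion.

def route(candidate_ports, queue_depth):
    """candidate_ports: output links leading toward the packet's destination;
    queue_depth: packets currently waiting at each port."""
    return min(candidate_ports, key=lambda port: queue_depth[port])

queues = {'north': 3, 'east': 0}         # east link idle, north congested
print(route(['north', 'east'], queues))  # -> 'east'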

Computing the best high-resolution 3D tissue images

Real-time, 3D microscopic tissue imaging could be a revolution for medical fields such as cancer diagnosis, minimally invasive surgery, and ophthalmology. University of Illinois researchers have developed a technique to computationally correct for aberrations in optical tomography, bringing the future of medical imaging into focus.
The computational technique could provide faster, less-expensive, and higher-resolution tissue imaging to a broader population of users. The group describes its technique in an online early edition of the Proceedings of the National Academy of Sciences.
"Computational techniques allow you to go beyond what the optical system can do alone, to ultimately get the best quality images and 3D datasets," said Steven Adie, a postdoctoral researcher at the Beckman Institute for Advanced Science and Technology at the U. of I. "This would be very useful for real-time imaging applications such as image-guided surgery."
Aberrations, such as astigmatism or distortion, plague high-resolution imaging. They make objects that should look like fine points appear to be blobs or streaks. The higher the resolution, the worse the problem becomes. It's especially tricky in tissue imaging, when precision is vital to a correct diagnosis.
Adaptive optics can correct aberrations in imaging. It's widely used in astronomy to correct for distortion as starlight filters through the atmosphere: a complex system of mirrors smooths out the scattered light before it enters the lens. Medical scientists have begun applying adaptive optics hardware to microscopes, hoping to improve cell and tissue imaging.

Swiss scientists demonstrate mind-controlled robot

Lausanne, Switzerland (AP)—Swiss scientists have demonstrated how a partially paralyzed person can control a robot by thought alone, a step they hope will one day allow immobile people to interact with their surroundings through so-called avatars.

Similar experiments have taken place in the United States and Germany, but they involved either able-bodied patients or invasive brain implants.

On Tuesday, a team at Switzerland's Federal Institute of Technology in Lausanne used only a simple head cap to record the brain signals of Mark-Andre Duc, who was at a hospital in the southern Swiss town of Sion, 100 km (62 miles) away.

Duc's thoughts—or rather, the electrical signals emitted by his brain when he imagined lifting his paralyzed fingers—were decoded almost instantly by a laptop at the hospital. The resulting instructions—left or right—were then transmitted to a foot-tall robot scooting around the Lausanne lab.

Duc lost control of his legs and fingers in a fall and is now considered partially quadriplegic. He said controlling the robot wasn't hard on a good day.

"But when I'm in pain it becomes more difficult," he told The Associated Press through a video link screen on a second laptop attached to the robot.

Background noise caused by pain or even a wandering mind has emerged as a major challenge in the research of so-called brain-computer interfaces since they first began to be tested on humans more than a decade ago, said Jose Millan, who led the Swiss team.

Optical clock signal transmission helps redefine time

Atomic clocks based on the oscillations of a cesium atom keep amazingly steady time and also define the precise length of a second. But cesium clocks are no longer the most accurate. That title has passed to an optical clock housed at the U.S. National Institute of Standards and Technology (NIST) in Boulder, Colo., that can keep time to within 1 second in 3.7 billion years. Before this newfound precision can redefine the second, or lead to new applications like ultra-precise navigation, the system used to communicate time around the globe will need an upgrade. Recently, scientists from the Max Planck Institute of Quantum Optics in the south of Germany and the national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), in the north took a first step along that path, successfully sending a highly accurate clock signal across the many hundreds of kilometers of countryside that separate their two institutions.
   
The researchers will present their findings at the Conference on Lasers and Electro-Optics (CLEO: 2012), taking place May 6-11 in San Jose, Calif.
   
"Over the last decade a new kind of frequency standard has been developed that is based on optical transitions, the so-called optical clock," says Stefan Droste, a researcher at the Max Planck Institute of Quantum Optics. The NIST optical clock, for example, is more than one hundred times more accurate than the cesium clock that serves as the United States' primary time standard.

New software matches more kidney donations, faster

Jack Burns and his wife, Adele, welcomed Doug Robertson with open arms. It was a very special reunion!

"I didn't know whether I was ever going to meet my recipient and I was just thrilled that we could get together," said Doug, who had traveled from his home in Portsmouth, N.H., to meet Jack and his wife. Doug came into Jack and Adele's lives in 2010 when Jack, who has diabetes and high blood pressure, needed a new kidney. Adele wanted to give him one of her own. "I wanted to have my husband around and I knew that we didn't have a lot of options," says Adele.

Jack gets choked up thinking about what his wife sacrificed. "I was grateful to have someone who loved me that much." But Adele was not a good medical match for her husband, so they joined a live-donor kidney exchange program. She donated one of her kidneys to a suitable recipient and Jack got a kidney from Doug. Such an exchange can be much quicker than waiting for an organ from a deceased donor.

"The deceased donor wait list can be very long for people," explains Ruthanne Hanto, director of the Organ Procurement and Transplantation Network (OPTN) Kidney Paired Donation pilot program, which is operated under the United Network for Organ Sharing (UNOS). "If somebody brings a living donor with them, then they have a great chance of getting transplanted sooner."

Smart sensor could lead to flying 3D eye-bots

Like a well-rehearsed formation team, a flock of flying robots rises slowly into the air with a loud buzzing noise. A good two dozen in number, they perform an intricate dance in the sky above the seething hordes of soccer fans. Rowdy hooligans have stormed the field and set off flares. Fights are breaking out all over, smoke is hindering visibility, and chaos is the order of the day. Only the swarm of flying drones can maintain an overview of the situation.

These unmanned aerial vehicles (UAVs) are a kind of mini-helicopter, with a wingspan of around 2 m. They have a propeller on each of their two variable-geometry side wings, which lends them rapid and precise maneuverability. In operation over the playing field, their cameras and sensors capture urgently needed images and data, and transmit them to the control center. Where are the most seriously injured people? What’s the best way to separate the rival gangs? The information provided by the drones allows the head of operations to make important decisions more quickly, while the robots form up to go about their business above the arena autonomously, without ever colliding with each other or with any other obstacles.

Objects that know when they are touched

A doorknob that knows whether to lock or unlock based on how it is grasped, a smartphone that silences itself if the user holds a finger to her lips and a chair that adjusts room lighting based on recognizing if a user is reclining or leaning forward are among the many possible applications of Touché, a new sensing technique developed by a team at Disney Research, Pittsburgh, and Carnegie Mellon University.

Touché is a form of capacitive touch sensing, the same principle underlying the types of touchscreens used in most smartphones. But instead of sensing electrical signals at a single frequency, like the typical touchscreen, Touché monitors capacitive signals across a broad range of frequencies.

This Swept Frequency Capacitive Sensing (SFCS) makes it possible to not only detect a "touch event," but to recognize complex configurations of the hand or body that is doing the touching. An object thus could sense how it is being touched, or might sense the body configuration of the person doing the touching.
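
A minimal sketch of the idea (the gesture names and response profiles here are invented for illustration, not Disney's data): sweep a range of frequencies, record the capacitive response at each, and match the measured profile against profiles captured during training.

import numpy as np

reference_profiles = {              # per-gesture response across frequencies
    'one finger': np.array([0.9, 0.7, 0.4, 0.2]),
    'full grasp': np.array([0.8, 0.9, 0.9, 0.7]),
}

def classify(measured):
    """Return the trained gesture whose swept-frequency profile is nearest."""
    return min(reference_profiles,
               key=lambda g: np.linalg.norm(reference_profiles[g] - measured))

print(classify(np.array([0.85, 0.88, 0.92, 0.65])))  # -> 'full grasp'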

Robots that reveal the inner workings of brain cells

Gaining access to the inner workings of a neuron in the living brain offers a wealth of useful information: Its patterns of electrical activity, its shape, even a profile of which genes are turned on at a given moment. However, achieving this entry is such a painstaking task that it is considered an art form; it is so difficult to learn that only a small number of laboratories in the world practice it.
But that could soon change: Researchers at Massachusetts Institute of Technology (MIT) and Georgia Institute of Technology (Georgia Tech) have developed a way to automate the process of finding and recording information from neurons in the living brain. The researchers have shown that a robotic arm guided by a cell-detecting computer algorithm can identify and record from neurons in the living mouse brain with better accuracy and speed than a human experimenter.
The new automated process eliminates the need for months of training and provides long-sought information about living cells' activities. Using this technique, scientists could classify the thousands of different types of cells in the brain, map how they connect to each other, and figure out how diseased cells differ from normal cells.

Honeycomb of magnets could usher in new type of computing

In a study published recently in the journal Science, scientists have taken an important step toward developing a new material, built from nano-sized magnets, that could ultimately lead to new types of electronic devices with greater processing capacity than is currently feasible.
           
Many modern data storage devices, like hard disk drives, rely on the ability to manipulate the properties of tiny individual magnetic sections, but their overall design is limited by the way these magnetic 'domains' interact when they are close together.

Now, researchers from Imperial College London have demonstrated that a honeycomb pattern of nano-sized magnets, in a material known as spin ice, introduces competition between neighbouring magnets, and reduces the problems caused by these interactions by two-thirds. They have shown that large arrays of these nano-magnets can be used to store computable information. The arrays can then be read by measuring their electrical resistance.

Spin spirals could help miniaturization of computers

How can computer data be reliably stored and read out in the future, as computers get smaller and smaller? Scientists from Jülich, Hamburg and Kiel propose making use of magnetic moments in chains of iron atoms. This would allow information to be transported on the nanoscale in a fast and energy-efficient manner over a wide temperature range, while remaining largely unaffected by external magnetic fields. The researchers have demonstrated this in both theory and experiment. Their work could pave the way for further miniaturization in information processing. The results have been published in the latest edition of the international scientific journal Physical Review Letters, together with a recommendation by the editor and an additional commentary.

Up to now, computers have saved data in magnetic domains (“bits”) on the hard drive. These domains are already inconceivably small by human standards: a single 1-terabyte hard drive contains around eight trillion bits (one terabyte is 10¹² bytes, at eight bits per byte). In order to make new functionalities possible, however, computer components will have to “shrink” even further. When bits lie too close together, their magnetic fields overlap, making the writing and reading of data impossible. For this reason, new concepts are required. One method of transporting data on a nanometre scale was suggested recently by scientists at Forschungszentrum Jülich and the universities of Hamburg and Kiel.

Paralyzed woman uses her mind to control robot arm

NEW YORK (AP)—Using only her thoughts, a Massachusetts woman paralyzed for 15 years directed a robotic arm to pick up a bottle of coffee and bring it to her lips, researchers report in the latest advance in harnessing brain waves to help disabled people.

In the past year, similar stories have included a quadriplegic man in Pennsylvania who made a robotic arm give a high-five and stroke his girlfriend's hand, and a partially paralyzed man who remotely controlled a small robot that scooted around in a Swiss lab.

It's startling stuff. But will the experimental brain-controlled technology ever help paralyzed people in everyday life?

Experts in the technology and in rehabilitation medicine say they are optimistic that it will, once technology improves and the cost comes down.

The latest report, which was published online Wednesday in the journal Nature, comes from scientists at Brown University, the Providence VA Medical Center in Rhode Island, Harvard Medical School and elsewhere.

Computing experts unveil superefficient “inexact” chip

Researchers have unveiled an “inexact” computer chip that challenges the industry’s 50-year pursuit of accuracy. The design improves power and resource efficiency by allowing for occasional errors. Prototypes unveiled this week at the ACM International Conference on Computing Frontiers in Cagliari, Italy, are at least 15 times more efficient than today’s technology.

The research, which earned best-paper honors at the conference, was conducted by experts from Rice University in Houston, Singapore’s Nanyang Technological University (NTU), Switzerland’s Center for Electronics and Microtechnology (CSEM) and the University of California, Berkeley.


How ion bombardment reshapes metal surfaces

To modify a metal surface at the scale of atoms and molecules—for instance to refine the wiring in computer chips or the reflective silver in optical components—manufacturers shower it with ions. While the process may seem high-tech and precise, the technique has been limited by a lack of understanding of the underlying physics. In a new study, Brown University engineers modeled noble gas ion bombardments with unprecedented richness, providing long-sought insights into how the process works.
"Surface patterns and stresses caused by ion beam bombardments have been extensively studied experimentally but could not be predicted accurately so far," said Kyung-Suk Kim, professor of engineering at Brown and coauthor of the study published in the Proceedings of the Royal Society A. "The new discovery is expected to provide predictive design capability for controlling the surface patterns and stresses in nanotechnology products."

Malware intelligence system enables organizations to share threat information

As malware threats expand into new domains and increasingly focus on industrial espionage, Georgia Institute of Technology researchers are launching a new weapon to help battle the threats: A malware intelligence system that will help corporate and government security officials share information about the attacks they are fighting.
Known as Titan, the system will be at the center of a security community that will help create safety in numbers as companies large and small add their threat data to a knowledge base that will be shared with all participants. Operated by security specialists at the Georgia Tech Research Institute (GTRI), the system builds on a threat analysis foundation—including a malware repository that analyzes and classifies an average of 100,000 pieces of malicious code each day.

Scientists hit major milestone in whole-brain circuit mapping project

Neuroscientists at Cold Spring Harbor Laboratory (CSHL) reached an important milestone today, publicly releasing the first installment of the 500 TB of data collected so far in their pathbreaking project to construct the first whole-brain wiring diagram of a vertebrate brain, that of the mouse.
   
The data consist of gigapixel images (each close to 1 billion pixels) of whole-brain sections that can be zoomed to show individual neurons and their processes, providing a "virtual microscope." The images are integrated with other data sources from the web, and are being made fully accessible to neuroscientists as well as interested members of the general public (http://mouse.brainarchitecture.org). The data are being released pre-publication in the spirit of the open-science initiatives that have become familiar in digital astronomy (e.g., the Sloan Digital Sky Survey) but are not yet as widespread in neurobiology.
   

Spin currents found in topological insulators

Physicists in Germany have recently provided new insights into spintronics: in ultra-thin topological insulators, they have identified spin-polarized currents, which were first theoretically predicted six years ago. They have also presented a method for applying these currents in the development of new computers.

First “map” of the bacterial make-up of humans published

The landmark publication this week of a “map” of the bacterial make-up of healthy humans required the work of 200 scientists, who made sense of more than 5,000 samples of human and bacterial DNA and 3.5 terabases of genomic data. The map should help us define and track the microbiome.

Make your Computer Welcome You!


With this trick, you can make your computer welcome you in its computerized voice every time you start Windows. You can make your Windows-based computer say "Welcome to your PC, Username."

Make Windows Greet You with a Custom Voice Message at Startup

To use this trick, follow the instructions given below:

1. Click on Start. Navigate to All Programs, Accessories and Notepad.
2. Copy and paste the exact code given below.
' Create a SAPI voice object and speak the welcome message aloud
Dim speaks, speech
speaks = "Welcome to your PC, Username"
Set speech = CreateObject("sapi.spvoice")
speech.Speak speaks
3. Replace Username with your own name.
4. Click on the File menu, choose Save As, select All Files in the Save as Type option, and save the file as Welcome.vbs (any name ending in .vbs will work).
5. Copy the saved file.
6. Navigate to C:\Documents and Settings\All Users\Start Menu\Programs\Startup (in Windows XP) or to C:\Users\Username\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup (in Windows 7 and Windows Vista), if C: is your system drive.
7. Paste the file.



The next time you start your computer, Windows will welcome you in its own computerized voice.

Note: For best results, it is recommended that you change the sound scheme to No Sounds. You can do this by following the steps given below:
1. Go to Control Panel.
2. Click on Switch to Classic View.
3. Click on Sounds and Audio Devices.
4. Click on the Sounds tab.
5. Select No Sounds from the Sound Scheme option.
6. If you wish to keep your previous sound scheme, save it by clicking Yes in the popup menu.
7. Click OK.



Try it yourself to see how it works. In my opinion, this is an excellent trick. Whenever I start my PC in front of anybody and it welcomes me, they are left wondering how brilliant a computer I have.