This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

Humanity has long since established a foothold in the Arctic and Antarctic, but extensive colonization of these regions may soon become economically viable. If we can learn to build self-sufficient habitats in these extreme environments, similar technology could be used to live on the Moon or Mars.

Citation: Inflatable Habitats for Polar and Space Colonists (2007, January 29) retrieved 18 August 2019 from https://phys.org/news/2007-01-inflatable-habitats-polar-space-colonists.html

Inflatable dome for cold, high-latitude regions on Earth. The main figure (a) shows a cross-section of the suggested biosphere, and the small figure (b) shows a top-down view. The important components are labeled: a thin, transparent double film on the sunlit side (1), a reflective cover on the shaded side (2), control louvers (3), the entrance (5), and an air pump/ventilator (6). The direction of the Sun is indicated by beams of light (4).

The average temperature of the Antarctic coast in winter is about –20 °C. As if this weren’t enough, the region suffers from heavy snowfall, strong winds, and six-month nights. How can humanity possibly survive in such a hostile environment? So far we seem to have managed well; Antarctica has almost forty permanently staffed research stations (with several more scheduled to open by 2008). These installations are far from self-sufficient, however; the USA alone spent 125 million dollars in 1995 on maintenance and operations. All vital resources must be imported—construction materials, food, and especially fuel for generating electricity and heat.
Modern technology and construction techniques may soon permit the long-term, self-sufficient colonization of such extreme environments.

Why would anyone want to live there? Exceptional scientific research aside, the Arctic is thought to be rich in mineral resources (oil in particular). The Antarctic is covered by an ice sheet over a mile thick, making any mineral resources it may have difficult to access. Its biological resources, however, have great potential. Many organisms adapted to extreme cold have evolved unusual biochemical processes, which can be leveraged into valuable industrial or medical techniques. Alexander Bolonkin and Richard Cathcart are firm believers in the value of this chilling territory. “Many people worldwide, especially in the Temperate Zones, muse on the possibility of humans someday inhabiting orbiting Space Settlements and Moon Bases, or a terraformed Mars,” Bolonkin points out, “but few seem to contemplate an increased use of ~25% of Earth’s surface—the Polar Regions.”

Indeed, the question of space exploration is intriguing. We would all like to know whether there is life on Mars, but robot probes can only perform the experiments they take along with them. Only humans are flexible enough to explore a new territory in detail and determine whether there are enough resources to sustain a long-term presence. Does modern technology really permit the design of lightweight, energy-efficient habitats suitable for other worlds?

Greenhouse Living

The Sun provides the Earth and Moon with about 1400 watts per square meter, which is ample energy to warm a habitat even when the angle of the incident light and losses due to reflection are taken into account. On Mars, the sunshine is a little less than half as strong—which means that the equator of Mars receives about as much solar energy as the higher latitudes of Earth (Iceland, for example). The most efficient way to generate heat from sunlight is, of course, the well-known “greenhouse” effect.
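The “half as strong” figure above follows directly from the inverse-square law. A quick sketch, using only the 1400 W/m² value quoted in the article and the mean Sun–Mars distance of about 1.52 AU (the function name is ours):

```python
# Rough check of the solar-flux figures quoted above, using the
# inverse-square law. The 1400 W/m^2 value (as quoted in the article)
# and the orbital radius of Mars are the only inputs.

SOLAR_CONSTANT_EARTH = 1400.0  # W/m^2 at 1 AU, as quoted in the article
MARS_ORBIT_AU = 1.52           # mean Sun-Mars distance in AU

def flux_at(distance_au, flux_at_1au=SOLAR_CONSTANT_EARTH):
    """Solar flux falls off with the square of the distance from the Sun."""
    return flux_at_1au / distance_au ** 2

mars_flux = flux_at(MARS_ORBIT_AU)
print(round(mars_flux))  # ~606 W/m^2, a little less than half of Earth's
```

The result, roughly 606 W/m², is indeed a little less than half the flux at Earth, consistent with the comparison to Earth's higher latitudes.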
Given a transparent or translucent roof, any structure can hold onto the energy of sunlight long enough to transform it into heat. Glass works well for this, but glass is heavy and expensive to transport. Some good alternatives to glass are now available, however, and more options are on the way. Innovative manufacturing techniques have created many useful composite materials, including translucent, flexible membranes such as Saint-Gobain’s Sheerfill®. While these materials are certainly more expensive than glass, very little is required to construct a useful shelter.

In a recent article submitted to arXiv.org, Bolonkin and Cathcart have designed an inflatable, translucent dome that can heat its interior to comfortable temperatures using only the weak sunlight of high latitudes. While many details remain to be worked out, the essential concept is sound. To improve the energy efficiency of the structure, they propose adding multiple insulating layers, aluminum-coated shutters, and a fine electrical network to sense damage to the structure. The dome would be supported entirely by the pressure of the air inside, which can be adjusted to compensate for the added buoyancy caused by high winds. The principal advantages of this design are the low weight and flexibility of the material. If only a few people at a time need shelter, an enclosure the size of a small house would weigh only about 65 kg, or as much as a person. This is light enough even for a space mission, and setting up would be as easy as turning on an air pump. For large colonies, enough membrane to enclose 200 hectares would weigh only 145 tons. The interior would be warm and sheltered, a safe environment for the construction of more traditional buildings and gardens.

Bolonkin and Cathcart have attracted attention with their proposal, but a prototype has not yet been constructed.

Notes:
Source: 1996 report on the U.S.
Antarctic Program by the National Science and Technology Council; www.nsf.gov/pubs/1996/nstc96rp/chiv.htm
Source: Sam Johnston, “Recent Trends in Biological Prospecting”, UN University Institute for Advanced Studies; www.ias.unu.edu/sub_page.aspx?catID=35&ddlID=20
Paper: xxx.arxiv.org/abs/physics/0701098

By Ben Mathiesen, Copyright 2007 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
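As a rough cross-check of the mass figures in the article above: 145 metric tons of membrane for 200 hectares implies a very light areal density, and at that same density the 65 kg small-house shelter corresponds to an envelope area the article does not state, so the second number below is our inference, not a figure from the source:

```python
# Sanity check on the membrane mass figures quoted in the article:
# 145 metric tons of film to enclose 200 hectares.

LARGE_MASS_KG = 145_000.0       # 145 metric tons
LARGE_AREA_M2 = 200 * 10_000.0  # 200 hectares in m^2

areal_density = LARGE_MASS_KG / LARGE_AREA_M2  # kg per m^2 of membrane
print(areal_density)  # 0.0725 kg/m^2, i.e. ~72.5 g/m^2

# At the same areal density, a 65 kg shelter corresponds to roughly
# 900 m^2 of membrane (an inference on our part; the article gives
# no area for the small enclosure).
small_area = 65.0 / areal_density
print(round(small_area))
```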
(PhysOrg.com) — As one of the newest research areas today, the field of magnonics is attracting researchers for many reasons, not the least being its possible role in the development of transistor-less logic circuits. Information presented at the first conference on magnonics last summer in Dresden has spurred a cluster of papers that focus on recent progress in the field. In one of these studies, Alexander Khitun, Mingqiang Bao, and Kang L. Wang from the University of California at Los Angeles have shown that magnonic logic circuits could offer some significant advantages – in spite of some disadvantages – that may allow them to not only compete with but also outperform transistor-based CMOS logic circuits.

While the amplitude-encoding approach has benefits, including low power consumption due to the low energy of the spin wave signal, the researchers here think that the phase-encoding approach is more promising. This is because the phase-encoding approach enables different frequencies to be used as separate information channels, allowing parallel data processing in the same device. The capability of multi-channel data processing would provide a fundamental advantage over existing switch-based logic circuitry, and could lead to performance rates beyond the limits of today’s technology. “The greatest potential advantage of magnonic logic circuits is the ability to process information in parallel on different frequencies, which is not possible for CMOS-based logic,” Khitun told PhysOrg.com.
Khitun, Bao, and Wang have previously fabricated a prototype magnonic device that operates in the GHz frequency range at room temperature. However, in order for magnonic logic circuits to take advantage of their potential benefits, researchers will have to find solutions to several challenges. For instance, current prototypes will require increased energy efficiency and will need to be scaled down to the submicrometer range in order to compete with CMOS logic circuits. By comparison, transistors still have plenty of room to scale down in size, although power dissipation will likely make further scaling inefficient in the CMOS architecture. Another challenge for the magnonic phase-encoding approach in particular is the requirement for a bi-stable phase element to provide the output on two phases. In their analysis, the researchers note that one candidate is a device called the magnetic parametron, which was invented in the early days of magnetic computers more than 50 years ago. Interestingly, parametron-based magnetic computers originally competed with transistor-based computers, which eventually proved to be the better option. Yet the magnetic parametron may now give magnonic logic circuits the ability to live up to their potential.

Other challenges for magnonic logic circuits include minimizing the inductive crosstalk between input and output ports, demonstrating components of the circuits that have not yet been realized, and ensuring that the spin wave devices are compatible with conventional electron-based devices to enable efficient data exchange. Although the development of high-performance magnonic logic circuits will face challenges, Khitun, Bao, and Wang conclude that the advantages are significant enough to justify extensive research.
Overall, the researchers predict that, even if magnonic logic circuits don’t fully replace CMOS logic circuits, they may provide complementary components by offering low-power hardware for certain general and special-task data processing.

More information: Alexander Khitun, Mingqiang Bao, and Kang L. Wang. “Magnonic logic circuits.” J. Phys. D: Appl. Phys. 43 (2010) 264005 (10pp). doi:10.1088/0022-3727/43/26/264005

This figure compares CMOS logic and magnonic logic in terms of throughput (the number of operations per area per time) as a function of the minimum feature size, which is the gate length for CMOS and the wavelength for a spin wave circuit. According to the projected estimates, spin logic may provide a throughput advantage of more than three orders of magnitude over CMOS, because the throughput of the spin circuit is inversely proportional to the wavelength. However, the throughput of demonstrated spin logic prototypes is currently far below current CMOS technology. Image credit: Alexander Khitun, et al.

The field of magnonics gets its name from spin waves and their associated quasi-particles, called magnons, which have attracted scientific interest since the 1950s. Spin waves are collective spin excitations in magnetically ordered materials; by controlling the surrounding magnetic field, researchers can control these excitations and use them, for example, to carry and process information. Over the past few years, researchers have been investigating how to exploit spin wave phenomena to make logic circuits, which are the basis of data processing in electronic devices. Whereas CMOS logic circuits use electric current to store and transfer data, magnonic logic circuits use spin waves propagating in magnetic waveguides.
By avoiding electric currents, magnonic logic circuits have the potential to enable more efficient data transfer and enhanced logic functionality, including parallel data processing.

On the other hand, spin waves are known to have properties that present disadvantages for data processing: a group velocity more than 100 times slower than the speed of light, and an attenuation (reduction of signal strength) more than 1,000,000 times higher than that of photons. However, as chip density has increased and the distances between components have become smaller, the slow velocity and high attenuation have become less problematic. Now, fast signal modulation has become more important, which spin waves can provide due to their short wavelength and long coherence length. As the researchers explain in their analysis, a magnonic logic circuit can encode a bit of information in two ways: through either the amplitude or the phase of the spin wave. In the first working spin wave-based logic device, demonstrated in 2005, Mikhail Kostylev and coauthors used the amplitude-encoding approach. They split the spin wave into two paths, which would later interfere with each other either constructively or destructively. The interference creates two opposite amplitudes that represent the 0 and 1 logic states. In the second approach, a spin wave propagating through an inverter waveguide undergoes a half-wavelength phase change. The original phase ‘0’ and the inverted phase ‘π’ can then be used to represent the logic states 0 and 1, respectively.

Citation: Researchers analyze the future of transistor-less magnonic logic circuits (2010, June 28) retrieved 18 August 2019 from https://phys.org/news/2010-06-future-transistor-less-magnonic-logic-circuits.html
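The amplitude-encoding scheme described above, where two spin-wave paths recombine constructively or destructively, can be illustrated with a toy superposition of two unit-amplitude waves. This is an illustration of the interference principle only, not a model of a real magnonic device:

```python
import numpy as np

# Two branches of a split spin wave recombine. Equal phases interfere
# constructively (large output amplitude, read as logic 1); shifting
# one branch by pi gives destructive interference (near-zero
# amplitude, read as logic 0).

def recombine(phase_a, phase_b):
    """Amplitude of two superposed unit-amplitude wave branches."""
    return abs(np.exp(1j * phase_a) + np.exp(1j * phase_b))

print(recombine(0.0, 0.0))               # 2.0 -> constructive, logic 1
print(round(recombine(0.0, np.pi), 10))  # 0.0 -> destructive, logic 0
```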
The researchers, Rebecca Sainidou from the Spanish National Research Council (CSIC), Jan Renger from the Institute of Photonic Sciences (ICFO), and coauthors from various institutes in Spain, have published their study on the new method for dielectric light enhancement in a recent issue of Nano Letters. As the scientists explain, one of the biggest problems for nanophotonic devices made of metal is that the metals in these devices absorb some light, limiting the overall light intensity. Here, the researchers proposed using dielectric rather than metallic structures, and described three different arrangements for achieving a large light enhancement: dielectric waveguides, dielectric particle arrays, and a hybrid of these two structures. In each of the three proposed arrangements, the researchers show that, by suppressing absorption losses, light energy can be piled up in resonant cavities to create extremely intense optical fields.

“Metallic structures can produce a similar level of enhancement via localized plasmon excitation, but only over limited volumes extending a few nanometers in diameter,” coauthor Javier García de Abajo from CSIC told PhysOrg.com. “In contrast, our work involves a huge enhancement over large volumes, thus making optimum use of the supplied light energy for extended biosensing applications and nonlinear optics. In metallic structures, absorption can be a problem because of potential material damage and because it reduces the available optical energy in the region of enhancement. This type of problem is absent in our dielectric structures.

“One could obtain a large light intensity enhancement simply by accumulating light from many sources (e.g., by placing the ends of many optical fibers near a common point in space, or by collecting light coming from many large-scale mirrors). But this sounds like wasting a lot of optical energy just to have an enhancement effect in a small region of space.
However, this is essentially what metallic structures do to concentrate light in so-called optical hot-spots using plasmons. In contrast, our structures do not concentrate the light in tiny spaces: they amplify it over large volumes, and this has important applications. This amplification is done through the use of evanescent and amplifying optical waves, which do not transport energy, but can accumulate it.”

Although theoretically there is no upper limit to the intensity enhancement that these structures can achieve, fabrication imperfections limit the enhancement to about 100,000 times the incident light intensity. In a proof-of-principle demonstration of the dielectric waveguide arrangement, the researchers showed a light intensity enhancement of a factor of 100. The researchers predict that this moderate enhancement should be easily improved by reducing the interface roughness through more careful fabrication, and are currently working on experiments to demonstrate a larger light enhancement.

As the researchers explain, part of the “holy grail” of designing nanodevices for optical applications is the ability to control light enhancement, as well as light confinement and subwavelength light guiding. By demonstrating the possibility of achieving an extremely large light intensity in large volumes, the researchers have opened up new possibilities in many nanophotonics applications. For example, nanophotonics components have already been used to produce artificial magnetism, negative refraction, cloaking, and biosensing.

“Certain molecules are produced in our bodies preferentially when we suffer some illnesses (e.g., tumors, infections, etc.),” García de Abajo said. “The detection of these molecules can sometimes be a difficult task, because they are present only in minute concentrations.
A practical way of detecting these molecules, and thus unveiling the potential illness with which they are associated, is by illuminating them and seeing how they scatter or absorb light (e.g., how light of different colors is absorbed by these molecules or how they change the color of the light). Therefore, it is important to amplify the optical signal that these molecules produce, so that we can have access to them even if they are in very low concentrations. Our structures do precisely that: they amplify the light over large volumes, so that if the molecules to be detected are placed inside those volumes, they will more easily produce the noted optical signal (absorption, color change, etc.). This is thus a practical way of detecting diseases such as cancer.

“In a different direction, light amplification is useful to produce a nonlinear response to the external light, and this can be directly applied to process information encoded as optical signals. This is an ambitious goal that is needed to fabricate optical computers. Such computers are still far from reachable, but they are expected to produce a tremendous increase in the speed of computation and communication. Our structures provide an innovative way of using light in devices for information processing.”

Citation: Extraordinary light enhancement technique proposed for nanophotonic devices (2010, November 3) retrieved 18 August 2019 from https://phys.org/news/2010-11-extraordinary-technique-nanophotonic-devices.html

More information: Rebecca Sainidou, et al. “Extraordinary All-Dielectric Light Enhancement over Large Volumes.” Nano Letters, ASAP. DOI: 10.1021/nl102270p

(PhysOrg.com) — In a new study, scientists have shown that simply tailoring the nanoscale geometrical parameters of dielectric structures can result in an increase in the light intensity to unprecedented levels.
Theoretically, they calculate that the light intensity could be increased to up to 100,000 times that of the incident intensity over large volumes. This large light enhancement could lead to new developments in all-optical switching and biosensing applications.
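The resonant build-up mechanism discussed above, where suppressing absorption lets light energy pile up in a cavity, can be illustrated with a textbook toy model: an ideal lossless ring cavity with a single input coupler. This model is our choice of illustration, not the authors' actual geometry:

```python
# On resonance, the circulating intensity in an ideal lossless ring
# cavity with an input coupler of amplitude reflectivity r exceeds
# the input intensity by t^2 / (1 - r)^2 = (1 + r) / (1 - r), where
# t^2 = 1 - r^2 is the coupler's power transmission.

def build_up(r):
    """On-resonance intensity enhancement of an ideal lossless ring cavity."""
    t_sq = 1.0 - r ** 2           # power transmission of the coupler
    return t_sq / (1.0 - r) ** 2  # equals (1 + r) / (1 - r)

print(round(build_up(0.99)))    # ~199: weak coupling -> strong build-up
print(round(build_up(0.9999)))  # ~19999: enhancement grows without bound
                                # as losses vanish; real fabrication
                                # imperfections cap it, as noted above.
```

The qualitative point matches the article: the enhancement has no theoretical ceiling, and only residual loss (absorption, roughness) sets the practical limit.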
Citation: Robotics team finds artificial fingerprints improve tactile abilities (2011, September 21) retrieved 18 August 2019 from https://phys.org/news/2011-09-robotics-team-artificial-fingerprints-tactile.html

Schematic of the indentation process. (a) Flat surface being applied to the ridged skin cover. (b) Curved surface being applied to the ridged skin cover. Image credit: DOI:10.3390/s110908626

More information: Artificial Skin Ridges Enhance Local Tactile Shape Discrimination, Saba Salehi, John-John Cabibihan, Shuzhi Sam Ge, arXiv:1109.3688v1 [physics.med-ph] DOI:10.3390/s110908626

Abstract
One of the fundamental requirements for an artificial hand to successfully grasp and manipulate an object is to be able to distinguish different objects’ shapes and, more specifically, the objects’ surface curvatures. In this study, we investigate the possibility of enhancing the curvature detection of embedded tactile sensors by proposing a ridged fingertip structure, simulating human fingerprints. In addition, a curvature detection approach based on machine learning methods is proposed to provide the embedded sensors with the ability to discriminate the surface curvature of different objects. For this purpose, a set of experiments were carried out to collect tactile signals from a 2 × 2 tactile sensor array, then the signals were processed and used for learning algorithms. To achieve the best possible performance for our machine learning approach, three different learning algorithms of Naïve Bayes (NB), Artificial Neural Networks (ANN), and Support Vector Machines (SVM) were implemented and compared for various parameters.
Finally, the most accurate method was selected to evaluate the proposed skin structure in recognition of three different curvatures. The results showed an accuracy rate of 97.5% in surface curvature discrimination.

via ArXiv Blog

As with many areas of science, even the seemingly simple stuff turns out to be quite complicated on closer view. The human fingertip, for example, covered with skin unlike that of any other body part, has raised ridges that allow people to feel the difference in texture between wood and metal or silk and linen. It can also detect temperature and, as it turns out, is also involved in figuring out the curvature of objects that are touched: consider, for example, the keys on a cell phone or a television remote control. It’s these kinds of abilities that Saba Salehi, John-John Cabibihan and Shuzhi Sam Ge are trying to emulate in their lab in Singapore. To begin, they’ve started with the easiest of the bunch, trying to figure out whether artificial fingerprints fitted on a robot hand can tell how roundish an object is. To find out, they built a touch sensor comprising a base plate, embedded sensors and a raised ridged surface, all on a 4 mm square. They then set about testing the simple sensor in a variety of ways to see whether it could sense flat, edged and curved objects when applied to them.
They also built an identical sensor, except that the raised portion was flat instead of ridged, to serve as a control. They found that the ridged sensor did indeed provide more feedback (resonance) information than the one with the flat surface, so much so that they were able to tell the difference between the three types of objects with 95.7% accuracy. Undoubtedly, more research will be done in this area by this group and others, and perhaps very soon robot fingertips will become just as sensitive as our own, if not more so, leading to a whole new generation of gentler robots able to perform tasks with both dexterity and a deft touch.

© 2011 PhysOrg.com

(PhysOrg.com) — Over the past couple of decades, many people in and out of the science community have watched the steady progress being made in robotics. It’s an exceptionally interesting field due to the anthropomorphic nature of the results. Each new step brings such machines closer to emulating us even as we look forward to the next step. One interesting thing about robotics is that certain areas seem to be advancing faster than others. Robot arms, for example, are old news; newer research has focused on hand movements. And as advances in hand movements have been made, more research has come to focus on finger movements and finally on tactile sensations. Now, new work by a trio of researchers from the National University of Singapore describes, in a paper published on the preprint server arXiv, how affixing artificial fingerprints to robot fingers can increase tactile “sensation,” allowing such a robot to discern differences in the curvature of objects.
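The classification step described above, mapping readings from a small tactile array to a surface class, can be sketched with synthetic data. The nearest-centroid rule below is a deliberately simple stand-in for the SVM the authors actually used, and every sensor pattern and number here is invented for illustration:

```python
import numpy as np

# Synthetic sketch: readings from a 2x2 tactile array (flattened to
# 4 values) are classified as flat, edged, or curved. Prototype
# patterns and noise levels are hypothetical.

rng = np.random.default_rng(0)

prototypes = {
    "flat":   np.array([1.0, 1.0, 1.0, 1.0]),  # uniform indentation
    "edged":  np.array([1.5, 1.5, 0.3, 0.3]),  # one row heavily loaded
    "curved": np.array([1.6, 0.8, 0.8, 0.4]),  # graded loading
}

def make_samples(label, n=50, noise=0.1):
    """Noisy synthetic readings around a class prototype."""
    return [(prototypes[label] + rng.normal(0, noise, 4), label)
            for _ in range(n)]

train = sum((make_samples(c) for c in prototypes), [])
centroids = {c: np.mean([x for x, y in train if y == c], axis=0)
             for c in prototypes}

def classify(reading):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(reading - centroids[c]))

test = sum((make_samples(c, n=20) for c in prototypes), [])
accuracy = np.mean([classify(x) == y for x, y in test])
print(f"accuracy: {accuracy:.2f}")  # well-separated classes -> near 1.0
```

With clean, well-separated synthetic patterns the toy classifier is nearly perfect; the real experiments, with genuine sensor noise, reached the 95.7% and 97.5% figures quoted above.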
The group not only tagged the birds with backpacks, which cost about £2,000 each, but also named them (Lyster, Chris, Clement, Martin and Kasper) and allowed others to track their journey via Google Maps. Unfortunately, only two of the group managed to survive the journey (Lyster and Chris, though there is still some hope for Kasper) from Norfolk in England to the Congo and back, but the tags did provide a clear map of the migration routes of the birds, which, oddly, were quite different for each bird, even though they all ended up in nearly the same place for their winter stay.

Citation: British ornithologists track cuckoo birds migration route (2012, May 7) retrieved 18 August 2019 from https://phys.org/news/2012-05-british-ornithologists-track-cuckoo-birds.html

All of the birds made it to the Congo; it was in coming back that they ran into trouble. To do so, they have to stop and fill up twice: once before crossing the Sahara desert, then again before crossing the Mediterranean Sea. And though the test was of just one small group migrating once, BTO members are already hypothesizing that the birds might be finding it more difficult to fill up properly before crossing the big hazards than in years past, which would account for fewer of them surviving the trip north. One surprise the group found was that the cuckoos all veered slightly west, towards Cameroon, before heading due north on the return trip, which perhaps shouldn’t have been a surprise after all, as that route crosses the narrowest part of the desert. There are other hazards as well, they found: one bird, Martin, apparently met his demise after wandering into a violent hail storm. Despite the losses, the team views the study as a success.
Much more is now known about the migration of the cuckoo bird, and because of that, efforts might begin to help more of the birds survive the round trip each year, thus preventing them from disappearing altogether.

Lyster, pre-migration. Image: BTO

More information: www.bto.org/science/migration/ … yster/finding-lyster

(Phys.org) — Nowhere, it seems, are bird watchers more enthusiastic than in Britain, where groups congregate to watch and discuss the most intimate details of their favorite fowl. Of consternation to such groups, however, is the decline of several favorite species, one of which is the cuckoo, whose numbers have dropped by nearly fifty percent in just the past couple of decades. Making matters even more frustrating has been the lack of data on the birds that might offer clues as to why their numbers are dropping. Now one group, the British Trust for Ornithology (BTO), has taken matters into its own hands by capturing five wild cuckoos and fitting them with tiny radio backpacks to allow tracking of the birds during their annual migration. The hope is that by tracking the birds to see where some die, efforts can be made to help them survive.

© 2012 Phys.Org
Nishiwaki made special note of something called Babinet-BPM: “We’ve developed a completely new analysis method, called Babinet-BPM. Compared with the usual FDTD method, the computation speed is 325 times higher, and it consumes only 1/16 of the memory. This is the result of a three-hour calculation by the FDTD method. We achieved the same result in just 36.9 seconds.” FDTD stands for finite-difference time-domain and BPM stands for beam propagation method; both are numerical analysis techniques.

Panasonic’s work is also described in Nature Photonics, in a study called “Efficient colour splitters for high-pixel-density image sensors.” The authors said, “We experimentally demonstrate that this principle of colour splitting based on near-field deflection can generate color images with minimal signal loss.”

Citation: Panasonic tech fixes color setbacks in low light photos (w/ video) (2013, March 29) retrieved 18 August 2019 from https://phys.org/news/2013-03-panasonic-tech-setbacks-photos-video.html

“Conventional color image sensors use a Bayer array [the arrangement of color filters used in imaging sensors in digital cameras, camcorders, and scanners to create a color image]. The filter pattern is 50 percent green, 25 percent red and 25 percent blue, in which a red, green, or blue light-transmitting filter is placed above each sensor. These filters block 50 to 70 percent of the incoming light before it even reaches the sensor,” according to a Panasonic release. Seeing demand for higher-sensitivity cameras on the rise, Panasonic sought a new solution to enable sensors to capture “uniquely vivid” color images. In the video, Seiji Nishiwaki commented further: “Here, color filters aren’t used. So light can be captured without loss, which enables us to achieve approximately double the sensitivity.” Nishiwaki said Panasonic’s technology can be used on different types of sensors, whether CCD, CMOS, or BSI, and can be in step with current semiconductor fabrication processes.
He said the new approach would not require any special materials or processes.

According to DigInfo TV: “The image sensor uses two types of color splitters: red deflectors and blue deflectors. The red and blue deflectors are arranged diagonally, with one of each for every four pixels. RGB values can be obtained by determining the intensity of light reaching each of the four pixels. For example, if white light enters each pixel, pixels where it doesn’t pass through a deflector receive unmodified white light. But in pixels with a red deflector, the light is split into red diffracted light and cyan non-diffracted light. And when white light passes through a blue deflector, it’s split into blue diffracted light and yellow non-diffracted light. As a result, the pixel arrangement is cyan, white + red, white + blue, and yellow. The RGB values are then calculated using a processing technique designed specifically for mixed color signals.”

© 2013 Phys.org

(Phys.org) — Panasonic’s new color filtering technology is in the news this week after a video from DigInfo TV presented what imaging experts at Panasonic have been up to: using “micro color splitters,” which achieve twice the brightness previously possible. These micro color splitters replace the traditional filter array over the image sensor. The result of the new approach is especially relevant for those working with low light photography—situations where there is less than daytime light outside, or any indoor photography without much ambient light. The researchers found their new approach could almost double the brightness of photos taken in low light environments. Saying no to traditional color filters, the researchers wanted a technique where light is captured without any loss.
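The deflector arrangement quoted above lends itself to a small linear model. Treating white as W = R + G + B, the four pixels in a block ideally receive cyan (W − R), white + red, white + blue, and yellow (W − B); simple differences then recover the RGB values. This idealized algebra is our illustration only, not Panasonic's actual mixed-color-signal processing:

```python
# Toy linear model of the color-splitter pixel block described above.
# Ideal pixel signals for a scene patch with components R, G, B:
#   cyan           = G + B        (white minus the deflected red)
#   white + red    = 2R + G + B
#   white + blue   = R + G + 2B
#   yellow         = R + G        (white minus the deflected blue)

def recover_rgb(cyan, white_plus_red, white_plus_blue, yellow):
    """Recover R, G, B from the four idealized pixel signals."""
    r = (white_plus_red - cyan) / 2.0     # (W + R) - (W - R) = 2R
    b = (white_plus_blue - yellow) / 2.0  # (W + B) - (W - B) = 2B
    g = cyan - b                          # (G + B) - B = G
    return r, g, b

# Forward model for a patch with R = 0.2, G = 0.5, B = 0.3:
R, G, B = 0.2, 0.5, 0.3
pixels = (G + B, 2 * R + G + B, R + G + 2 * B, R + G)
print(tuple(round(v, 6) for v in recover_rgb(*pixels)))  # (0.2, 0.5, 0.3)
```

In this idealized picture every photon lands on some pixel, which is the qualitative reason the splitter approach avoids the 50 to 70 percent filter loss quoted above.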
More information: Nature Photonics paper: www.nature.com/nphoton/journal … photon.2012.345.html Via Diginfo.tv

Journal information: Nature Photonics

The problem has been that image sensors have produced color pictures by using red, green, and blue filters for each pixel, but with that system, 50 to 70 percent of the light is lost. The micro color splitters control the diffraction of light at a microscopic level. Panasonic’s imaging experts said that they achieved approximately double the color sensitivity compared with conventional sensors that use color filters.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
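For readers unfamiliar with the FDTD method mentioned above: it advances Maxwell’s equations in small time steps on a staggered (Yee) grid, which is why full-wave runs can take hours. A minimal one-dimensional sketch in normalized units (illustrative only; it has no relation to Panasonic’s actual solver):

```python
import numpy as np

# Minimal 1D FDTD (Yee) update loop: E and H live on staggered grid points
# and leapfrog in time. Normalized units; courant <= 1 keeps the scheme stable.
def fdtd_1d(n_cells=200, n_steps=400, courant=0.5):
    ez = np.zeros(n_cells)      # electric field samples
    hy = np.zeros(n_cells - 1)  # magnetic field, offset half a cell
    for t in range(n_steps):
        hy += courant * (ez[1:] - ez[:-1])        # H update from the curl of E
        ez[1:-1] += courant * (hy[1:] - hy[:-1])  # E update from the curl of H
        ez[n_cells // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez

print(fdtd_1d().shape)  # (200,)
```

Babinet-BPM, by contrast, propagates the optical field one plane at a time, which is where the claimed speed and memory advantages come from.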
© 2014 Phys.org. All rights reserved.

Venn diagram showing the relationship between the assumptions of cognitive realism and cognitive completeness, and their overlap, which defines classical cognitive models. Quantum models satisfy cognitive completeness but not cognitive realism, and a model in the class ‘X’ would satisfy cognitive realism but not cognitive completeness. Credit: Yearsley and Pothos. ©2014 The Royal Society

More information: James M. Yearsley and Emmanuel M. Pothos. “Challenging the classical notion of time in cognition: a quantum perspective.” Proceedings of The Royal Society B. DOI: 10.1098/rspb.2013.3056

Journal information: Proceedings of the Royal Society B

“There are two lines of thought when it comes to using quantum theory to describe cognitive processes,” James M. Yearsley, a researcher in the Department of Psychology at City University London, told Phys.org. “The first is that some decision-making processes appear quantum because there are physical processes in the brain (at the level of neurons, etc.) that are quantum. This is very controversial and is a position held by only a minority. The second line of thought is that basic physical processes in the brain at the level of neurons are classical, and the (apparent) non-classical features of some human decision-making arise because of the complex way in which thoughts and feelings are related to basic brain processes. This is by far the more common viewpoint, and is the one we personally subscribe to.”

Memory construction

In their study, Yearsley and Emmanuel M.
Pothos, also at City University London, have proposed that quantum probability theory may be used to assign probabilities to how precisely our thoughts, decisions, feelings, memories, and other cognitive variables can be recalled and defined over time. In this view, recalling a memory at one point in time interferes with how we remember perceiving that same memory in the past or how we will perceive it in the future, much in the way a measurement may change the outcome of something being measured. This act of recall is sometimes called “constructive” because it can change (or construct) the recalled thoughts. In this picture, the memory itself is essentially created by the act of remembering.

As Yearsley explains, the idea that measurements might be constructive in cognition can be understood with an example of chocolate cravings. “It’s a little bit like how you can be sitting at your desk happily working away until one colleague announces that they are popping out to the shop and would you like anything, at which point you are overcome with a desire for a Twix!” he said. “That desire wasn’t there before your colleague asked; it was created by that process of measurement. In quantum approaches to cognition, cognitive variables are represented in such a way that they don’t really have values (only potentialities) until you measure them. That’s a bit like saying that as it gets towards lunchtime there is an increased potentiality for you to say you’d like a Twix if someone asks you, but if you’re hard at work you might still not be thinking consciously about food. Of course, this analogy isn’t perfect.”

(Phys.org) —The way that thoughts and memories arise from the physical material in our brains is one of the most complex questions in modern science. One important question in this area is how individual thoughts and memories change over time.
The classical, intuitive view is that every thought, or “cognitive variable,” that we’ve ever had can be assigned a specific, well-defined value at all times of our lives. But now psychologists are challenging that view by applying quantum probability theory to how memories change over time in our brains.

Citation: In quantum theory of cognition, memories are created by the act of remembering (2014, March 17) retrieved 18 August 2019 from https://phys.org/news/2014-03-quantum-theory-cognition-memories.html

This quantum view of memory is related to the uncertainty principle in quantum mechanics, which places fundamental limits on how much knowledge we can gain about the world. When measuring certain pairs of variables in physics, such as a particle’s position and momentum, the more precisely one variable is determined, the less precisely the other can be. The same is true in the proposed quantum view of cognitive processes. In this case, thoughts are linked in our cognitive system over time, in much the same way that position and momentum are linked in physics; the cognitive version can be considered a kind of entanglement in time. As a result, perfect knowledge of a cognitive variable at one point in time requires there to be some uncertainty about it at other times.

Overturning classical assumptions

The scientists explain that this proposal can be tested by performing experiments that try to violate so-called temporal versions of the Bell inequalities. In physics, violation of the temporal Bell inequalities signifies the failure of classical physics to describe the physical world.
In cognitive science, the violations would signify the failure of classical models of cognition that make two seemingly intuitive assumptions: cognitive realism and cognitive completeness.

As the scientists explain, cognitive realism is the assumption that all of the decisions a person makes can be entirely determined by processes at the neurophysiological level (although identifying all of these processes would be extremely complicated). Cognitive completeness is the assumption that the cognitive state of a person making a decision can be entirely determined by the probabilities of the outcomes of the decision. In other words, observing a person’s behavior can allow an observer to fully determine that person’s underlying cognitive state, without the need to invoke neurophysiological variables. Neither of these assumptions is controversial; in fact, both are central to many kinds of cognitive models. A quantum model, however, does not rely on these assumptions.

“I think the greatest significance of this work is that it succeeds in taking the widely held belief that cognitive variables such as judgments or beliefs always have well-defined values and gives us a way to put that intuition to experimental test,” Yearsley said. “Also, assuming we do find a violation of the temporal Bell inequalities experimentally, we would be ruling out not just a single model of cognition, but actually a very large class of models, so it’s potentially a very powerful result.”

Interpreting a possible violation of a temporal Bell inequality is not straightforward, since one would have to decide which of the two assumptions—realism or completeness—should be abandoned. The researchers argue that for the purposes of creating models of cognition it makes more sense to assume that cognitive realism is not valid, thus rejecting the idea that decisions can be thought of as fully determined by underlying neurophysiological processes.
A key implication would be that an individual may not have a well-defined judgment at all points in time, which may offer insight into aspects of cognition that have so far resisted formal explanation. One such example is the creation of false memories. The scientists hope that future research will help clarify the role of quantum probability in cognitive modeling, and shed light on the complicated processes that make up all of our memories, thoughts, and identities.
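The temporal Bell (Leggett-Garg) inequalities discussed above can be made concrete with a standard physics example (our illustration; the paper’s cognitive protocol differs in detail). For measurements at three equally spaced times on a two-level system precessing at angular frequency ω, quantum mechanics predicts two-time correlators C_ij = cos(ω(t_j − t_i)), while any realist, non-invasively-measurable description must satisfy K = C12 + C23 − C13 ≤ 1:

```python
import numpy as np

# Leggett-Garg ("temporal Bell") inequality for a precessing two-level system.
# phi = omega * tau is the rotation accumulated between consecutive measurements.
def lg_parameter(phi):
    c12 = c23 = np.cos(phi)   # correlators over the two short intervals
    c13 = np.cos(2 * phi)     # correlator over the full interval
    return c12 + c23 - c13    # classical (realist) bound: K <= 1

phis = np.linspace(0.0, np.pi, 1000)
k = lg_parameter(phis)
print(round(float(k.max()), 3))  # 1.5, reached at phi = pi/3: a quantum violation
```

An experimental value of K above 1 is what would rule out the whole class of models satisfying both cognitive realism and cognitive completeness.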
Dr. P. James Schuck discussed the paper that he, Dr. Bruce E. Cohen, Dr. Daniel J. Gargas, Dr. Emory M. Chan, and their co-authors published in Nature Nanotechnology, starting with the main challenges the scientists encountered in:

- developing luminescent probes with the photostability, brightness and continuous emission necessary for single-molecule microscopy
- developing sub-10 nm lanthanide-doped upconverting nanoparticles (UCNPs) an order of magnitude brighter under single-particle imaging conditions than existing compositions, lanthanides being transition metals with properties distinct from other elements

“The most common emitters used for single-molecule imaging – organic dyes and quantum dots – have significant limitations that have proven extremely challenging to overcome,” Schuck tells Phys.org. He explains that organic dyes are generally the smallest probes (typically ~1 nm in size), and will randomly turn on and off. This “blinking” is quite problematic for single-molecule imaging, he continues, and typically after emitting roughly 1 million photons they will photobleach – that is, turn off permanently. “This may sound like a lot of photons at first,” Schuck says, “but this means that the dyes stop emitting after only about 1 to 10 seconds under most imaging conditions. UCNPs never blink.”

Moreover, Schuck continues, the same problems exist for fluorescent quantum dots, or Qdots, as well. However, while it is possible to make Qdots that will not blink or photobleach, this usually requires the addition of layers to the Qdot, which makes them too large for many imaging applications. (A quantum dot is a semiconductor nanocrystal small enough to exhibit quantum mechanical properties.)
“Our new UCNPs are small, and do not blink or bleach.” Due to these properties, he notes, UCNPs have recently generated significant interest because they have the potential to be ideal luminescent labels and probes for optical imaging – but the major roadblock to realizing their potential had been the inability to design sub-10 nm UCNPs bright enough to be imaged at the single-UCNP level.

Schuck mentions another advantage of upconverting nanoparticles – namely, they operate by absorbing two or more infrared photons and emitting higher-energy visible light. “Since nearly all other materials do not upconvert, when imaging the UCNPs in a sample, there is almost no autofluorescent background originating from the sample. This results in good imaging contrast and large signal-to-background levels.” In addition, while organic dyes and Qdots can also absorb IR light and emit higher-energy light via a nonlinear two-or-more-photon absorption process, the excitation powers needed to generate measurable two-photon fluorescence signals in dyes and small Qdots are many orders of magnitude higher than is needed for generating upconverted luminescence from UCNPs. “These high powers are generally bad for samples and a big concern in bioimaging communities,” Schuck emphasizes, “where they can lead to damage and cell death.”

Schuck notes that two other key aspects central to the discoveries mentioned in the paper – using advanced single-particle characterization, and theoretical modeling – were a consequence of the multidisciplinary collaborative environment at the Foundry. “This study required us to combine single-molecule photophysics, the ability to synthesize ultrasmall upconverting nanocrystals of almost any composition, and the advanced modeling and simulation of UCNP optical properties,” he says.
“Accurately simulating and modeling the photophysical behavior of these materials is challenging due to the large number of energy levels in these materials that all interact in complex ways, and Emory Chan has developed a unique model that objectively accounts for all of the over 10,000 manifold-to-manifold transitions in the allowed energy range.”

Previously, Schuck says, the conventional wisdom for designing bright UCNPs had been to use a relatively small concentration of emitter ions in the nanoparticles, since too many emitters will lead to lower brightness, due to self-quenching effects, once the UCNP emitter concentration exceeds ~1%. “This turns out to be true if you want to make particles that are bright under ensemble imaging conditions – that is, where a relatively low excitation power is used – since you have many particles signaling collectively,” Schuck explains. “However, this breaks down under single-molecule imaging conditions.” In their paper, the researchers demonstrated that under the higher excitation powers used for imaging single particles, the relevant energy levels become more saturated and self-quenching is reduced. “Therefore,” Schuck continues, “you want to include in your UCNPs as high a concentration of emitter ions as possible.” This results in nanoparticles that are almost non-luminescent under low-excitation-power ensemble conditions, due to significant self-quenching, but ultra-bright under single-molecule imaging conditions.

UCNP size-dependent luminescence intensity and heterogeneity. a, Deviation of single UCNP luminescence intensity normalized to particle volume from ideal volumetric scaling (n = 300 total). The curve represents calculated intensity normalized to volume for UCNPs with a nonluminescent surface layer of 1.7 nm. Only intensities from single, unaggregated nanocrystals, as determined by Supplementary Fig. 5, are used.
The top inset shows a diagram representing an ideal nanocrystal in which all included emitters are luminescent (green circles). The bottom inset is a diagram representing a nanocrystal with emitters that are nonluminescent (maroon circles) in an outer surface layer. b, Fine spectra of the green emission bands collected from four single 8 nm UCNPs (curves 1–4) and their averaged spectrum (curve Σ). Credit: Courtesy Daniel Gargas, Emory Chan, Bruce Cohen, and P. James Schuck, The Molecular Foundry, Lawrence Berkeley National Laboratory

Experimental setup for single-UCNP optical characterization. A 980 nm laser is prefocused with a 500 mm lens before entering the back aperture of a 0.95 NA 100x objective (Zeiss), which adjusts the focal plane of the laser closer to that of the visible luminescence (dashed line). Emitted light is collected back through the same objective, filtered by two 700 nm short-pass filters and two 532 nm long-pass filters (Chroma) to remove residual laser light, and focused onto a single-photon-counting APD (MPD) or routed to an LN-cooled CCD spectrometer (Princeton Instruments) with a 1200 grooves/mm grating. A time-correlated single photon counter (PicoQuant) is used for luminescence lifetime measurements. All experiments were performed in ambient conditions at 10^6 W/cm^2 unless otherwise noted. Power-dependent data and single-particle line-cuts shown in Fig. 4 were collected with a 1.4 NA 100x oil-immersion objective (Nikon). Credit: Courtesy Daniel Gargas, Emory Chan, Bruce Cohen, and P. James Schuck, The Molecular Foundry, Lawrence Berkeley National Laboratory

Another important implication of this finding, Schuck adds, is that it should change how people will screen for the best single-molecule luminescent probes in the future. “Until now,” he notes, “people would first look to see which probes were bright using ensemble-level conditions, then would investigate only that subset as possible single-molecule probes.
Our new probes would, of course, have failed that screening test!” Schuck again emphasizes that “a key reason this discovery happened is that we have experts in all key areas in the same building, and we were able to quickly iterate through the theory-synthesis-characterization cycle.”

Regarding future research directions, Schuck notes, the scientists are pursuing a few different avenues. “We’d certainly like to now use these newly designed UCNPs for bioimaging… So far, we’ve only investigated the fundamental photophysical properties of these particles when they’re isolated on glass. We believe one exciting and important application will be their use in brain imaging – particularly for deep-tissue in vivo optical imaging of neurons and brain function.”

In closing, Schuck mentions other areas of research that might benefit from their study. “I think a primary application is in single-particle tracking within cells. For example,” he illustrates, “labeling specific proteins with individual UCNPs and tracking them to understand their cellular kinetics.” Along different lines, Schuck adds, it turns out that UCNPs are also excellent probes of very local electromagnetic fields. “This is because lanthanides have a rather unique set of photophysical properties, such as relatively prevalent magnetic dipole emission, allowing us to probe optical magnetic fields, and very long lifetimes, such that transitions are not strongly allowed, which allows us to more easily probe cavity quantum optical effects such as the Purcell enhancement of emission.” In fact, Schuck concludes, an experiment that uses UCNPs to report on the near-field strengths and field distributions surrounding nanoplasmonic devices is just underway.

When imaging at the single-molecule level, small irregularities known as heterogeneities become apparent – features that are lost in higher-scale, so-called ensemble imaging.
At the same time, it has until recently been challenging to develop luminescent probes with the photostability, brightness and continuous emission necessary for single-molecule microscopy. Now, however, scientists at the Molecular Foundry at Lawrence Berkeley National Laboratory in Berkeley, California, have developed upconverting nanoparticles (UCNPs) under 10 nm in diameter whose brightness under single-particle imaging exceeds that of existing materials by over an order of magnitude. The researchers state that their findings make a range of applications possible, including cellular and in vivo imaging, as well as reporting on local electromagnetic near-field properties of complex nanostructures.

Citation: Bright lights, small crystals: Scientists use nanoparticles to capture images of single molecules (2014, April 22) retrieved 18 August 2019 from https://phys.org/news/2014-04-bright-small-crystals-scientists-nanoparticles.html

More information: Engineering bright sub-10-nm upconverting nanocrystals for single-molecule imaging, Nature Nanotechnology 9, 300–305 (2014), doi:10.1038/nnano.2014.29

Journal information: Nature Nanotechnology

“This brings me to what is probably the most important takeaway from our work, which is the discovery and demonstration of new rules for designing ultrabright, ultrasmall UCNP single-molecule probes,” Schuck says. In addition, he stresses that these new rules contrast directly with conventional methods for creating bright UCNPs. “As we showed in our paper, we synthesized and imaged UCNPs as small as a single fluorescent protein!
For many bioimaging applications, very small – certainly smaller than 10 nm – luminescent probes are required, because you really need the label or probe to perturb the system it is probing as little as possible.”

Luminescence of UCNPs. a, Schematic of energy transfer upconversion with Yb3+ as sensitizer and Er3+ as emitter. b, Minimum peak excitation intensities of NIR light needed for multiphoton single-molecule imaging of various classes of luminescent probes. The peak excitation intensity ranges shown are required to detect signals of 100 c.p.s. Credit: Courtesy Daniel Gargas, Emory Chan, Bruce Cohen, and P. James Schuck, The Molecular Foundry, Lawrence Berkeley National Laboratory
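The concentration argument above (self-quenching penalizes high emitter loading at ensemble powers, but that penalty saturates away at single-particle powers) can be captured in a deliberately crude toy model. This is our illustration only, not Emory Chan’s rate-equation model; the functional forms and constants below are invented for the sketch:

```python
import numpy as np

# Toy model: brightness = (emitter fraction) x (saturation factor) x (quench factor).
# The quench term weakens as the excitation power saturates the emitting levels,
# mimicking the reduced self-quenching reported under single-particle powers.
def brightness(c, power, p_sat=1.0, c0=0.01):
    saturation = power / (power + p_sat)                      # fraction of emitters excited
    quench = 1.0 / (1.0 + (c / c0) ** 2 * (1.0 - saturation)) # fades at high power
    return c * saturation * quench

conc = np.linspace(1e-4, 0.6, 2000)                 # emitter fraction; 0.01 = 1%
best_low = conc[np.argmax(brightness(conc, power=0.01))]    # ensemble-like power
best_high = conc[np.argmax(brightness(conc, power=100.0))]  # single-particle power
print(best_low < best_high)  # True: the optimal concentration rises with power
```

Under this toy model the low-power optimum sits near the ~1% concentration mentioned in the article, while at high power the optimum shifts to much heavier doping, which is the qualitative design rule the paper reports.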
© 2017 Phys.org

Citation: Massive exoplanet discovered using gravitational microlensing method (2017, April 18) retrieved 18 August 2019 from https://phys.org/news/2017-04-massive-exoplanet-gravitational-microlensing-method.html

The light curve data for MOA-2016-BLG-227 are plotted with the best-fit model. The top panel shows the whole event; the bottom left and bottom right panels highlight the caustic-crossing feature and the second bump due to the cusp approach, respectively. The residuals from the model are shown in the insets of the bottom panels. Credit: Koshimoto et al., 2017.

Gravitational microlensing is an invaluable method of detecting new extrasolar planets circling their parent stars relatively closely. This technique is sensitive to low-mass planets orbiting beyond the so-called “snow line” around relatively faint host stars like M dwarfs or brown dwarfs. Such planets are of special interest to astronomers, as the most active planet formation occurs just beyond this line. Hence, understanding the distribution of exoplanets in this region could offer important clues to how planets form.

The microlensing event MOA-2016-BLG-227 was detected on May 5, 2016 by the Microlensing Observations in Astrophysics (MOA) group using the 1.8 m MOA-II telescope at the University of Canterbury Mt. John Observatory in New Zealand. Afterward, this event was the target of follow-up observations employing three telescopes located on Mauna Kea, Hawaii: the 3.8 m United Kingdom Infra-Red Telescope (UKIRT), the Canada France Hawaii Telescope (CFHT) and the Keck II telescope.
The VLT Survey Telescope (VST) at ESO’s Paranal Observatory in Chile and the Jay Baum Rich 0.71 m Telescope (C28) at the Wise Observatory in Israel were also used for these observations. This follow-up campaign allowed the research team, led by Naoki Koshimoto of Osaka University in Japan, to detect the new planet and to determine its basic parameters.

“The event and planetary signal were discovered by the MOA collaboration, but much of the planetary signal is covered by the Wise, UKIRT, CFHT and VST telescopes, which were observing the event as part of the K2 C9 program (Campaign 9 of the Kepler telescope’s prolonged mission),” the paper reads.

The team found that MOA-2016-BLG-227Lb is a super-Jupiter planet with a mass of about 2.8 Jupiter masses. The parent star is most probably an M or K dwarf located in the galactic bulge, with a mass estimated at around 0.29 solar masses. MOA-2016-BLG-227Lb orbits its host at a distance of approximately 1.67 AU. Other parameters, such as the radii of both objects and the orbital period of the planet, are yet to be determined.

“Our analysis excludes the possibility that the host star is a G-dwarf, leading us to a robust conclusion that the planet MOA-2016-BLG-227Lb is a super-Jupiter mass planet orbiting an M or K-dwarf star likely located in the Galactic bulge,” the researchers concluded.

The authors call for further investigation of the MOA-2016-BLG-227 event, which could deliver more detailed information about the newly found planetary system. They noted that this event should be revisited with the Hubble Space Telescope (HST) and the Keck adaptive optics (AO) system. Promising results could also come from future space- and ground-based telescopes like the James Webb Space Telescope (JWST), the Giant Magellan Telescope (GMT), the Thirty Meter Telescope and the Extremely Large Telescope (ELT).

(Phys.org)—Astronomers have found a new massive alien world using the gravitational microlensing technique.
The newly detected exoplanet, designated MOA-2016-BLG-227Lb, is about three times more massive than Jupiter and orbits a distant star approximately 21,000 light years away. The finding was published Apr. 6 in a paper on arXiv.org.

More information: MOA-2016-BLG-227Lb: A Massive Planet Characterized by Combining Lightcurve Analysis and Keck AO Imaging, arXiv:1704.01724 [astro-ph.EP] arxiv.org/abs/1704.01724

Abstract: We report the discovery of a microlensing planet, MOA-2016-BLG-227Lb, with a massive planet/host mass ratio of q ≃ 9×10^-3. This event was fortunately observed by several telescopes, as the event location was very close to the area of the sky surveyed by Campaign 9 of the K2 Mission. Consequently, the planetary deviation is well covered and allows a full characterization of the lensing system. High angular resolution images by the Keck telescope show excess flux other than the source flux at the target position, and this excess flux could originate from the lens star. We combined the excess flux and the observed angular Einstein radius in a Bayesian analysis which considers various possible origins of the measured excess flux as priors, in addition to a standard Galactic model. Our analysis indicates that it is unlikely that a large fraction of the excess flux comes from the lens. We compare the results of the Bayesian analysis using different priors for the probability of hosting planets with respect to host mass and find the planet is likely a gas giant around an M/K dwarf likely located in the Galactic bulge. This is the first application of a Bayesian analysis considering several different contamination scenarios for a newly discovered event. Our approach of considering different contamination scenarios is crucial for all microlensing events which have evidence for excess flux, irrespective of the quality of observation conditions, such as seeing.
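The quoted planet mass follows directly from the mass ratio and the host-mass estimate. A quick back-of-the-envelope check (our arithmetic, using an approximate solar-to-Jupiter mass conversion):

```python
# Consistency check of the numbers quoted above: planet mass = q x host mass,
# converted from solar to Jupiter masses.
M_SUN_IN_JUP = 1047.6          # Jupiter masses per solar mass (approximate)

q = 9e-3                       # planet/host mass ratio from the abstract
m_host_sun = 0.29              # host mass estimate, solar masses
m_planet_jup = q * m_host_sun * M_SUN_IN_JUP
print(round(m_planet_jup, 1))  # 2.7, consistent with "about 2.8 Jupiter masses"
```

The small residual difference simply reflects rounding in the quoted q and host-mass values.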
Kolkata: Bengal has emerged as the number one state in the country in terms of employment generation in rural areas, Chief Minister Mamata Banerjee said on Tuesday. The state topped the list by generating 30.98 crore person-days till March 31 under the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA).

The Chief Minister tweeted: “I am very happy to share with all of you that Bengal has emerged No.1 in the country in rural employment generation. Under 100 days work scheme, as on 31 March 2018, Bengal has generated 30.98 crore person-days, which is the highest in the country.”

Explaining the expenditure incurred on the project in the last fiscal, she added in the tweet: “Moreover, West Bengal reported the expenditure of Rs 8007.56 crore under this scheme in 2017-18, which is again the highest in the country. In terms of average person-days per household, West Bengal with 59 days in 2017-18, is the best performer among the major states.”

As per the data of the state Panchayats and Rural Development department, the Bengal government has not only secured the first rank in the country in terms of job creation but has even crossed the target set by the Centre for the 2017-18 financial year. The Centre had set a target of 23 crore person-days for Bengal in the 2017-18 fiscal; the state government had already created 24 crore person-days by the end of December 2017. In the last three months of the 2017-18 fiscal, the state created another 6.98 crore person-days, taking the total to 30.98 crore by March 31.

It may also be mentioned that the state government created 21 crore person-days in the 2016-17 financial year; the rise to 30.98 crore in 2017-18 marks an increase of around 48 percent.
This comes at a time when the state Panchayats and Rural Development department has set a target of creating 25 crore person-days in 2018-19. A meeting was held in this connection between senior officials of the state Panchayats and Rural Development department, the Gorkhaland Territorial Administration (GTA), the Siliguri Mahakuma Parishad and authorities from all districts. Meeting the target would also ensure the livelihood of around 10 lakh families, which would essentially mean that every Gram Panchayat will have to provide jobs to around 300 families.