Month: August 2019


Inflatable Habitats for Polar and Space Colonists

Humanity has long since established a foothold in the Arctic and Antarctic, but extensive colonization of these regions may soon become economically viable. If we can learn to build self-sufficient habitats in these extreme environments, similar technology could be used to live on the Moon or Mars.

Image: Inflatable dome for cold, high-latitude regions on Earth. The main figure (a) shows a cross-section of the suggested biosphere, and the small figure (b) shows a top-down view. The important components are labeled: a thin, transparent double film on the sunlit side (1), a reflective cover on the shaded side (2), control louvers (3), the entrance (5), and an air pump/ventilator (6). The direction of the Sun is indicated by beams of light (4).

The average temperature of the Antarctic coast in winter is about –20 °C. As if this weren't enough, the region suffers from heavy snowfall, strong winds, and six-month nights. How can humanity possibly survive in such a hostile environment? So far we seem to have managed well; Antarctica has almost forty permanently staffed research stations (with several more scheduled to open by 2008). These installations are far from self-sufficient, however; the USA alone spent 125 million dollars in 1995 on maintenance and operations.[1] All vital resources must be imported: construction materials, food, and especially fuel for generating electricity and heat. Modern technology and construction techniques may soon permit the long-term, self-sufficient colonization of such extreme environments.

Why would anyone want to live there? Exceptional scientific research aside, the Arctic is thought to be rich in mineral resources (oil in particular). The Antarctic is covered by an ice sheet over a mile thick, making any mineral resources it may have difficult to access. Its biological resources, however, have great potential. Many organisms adapted to extreme cold have evolved unusual biochemical processes, which can be leveraged into valuable industrial or medical techniques.[2]

Alexander Bolonkin and Richard Cathcart are firm believers in the value of this chilling territory. "Many people worldwide, especially in the Temperate Zones, muse on the possibility of humans someday inhabiting orbiting Space Settlements and Moon Bases, or a terraformed Mars," Bolonkin points out, "but few seem to contemplate an increased use of ~25% of Earth's surface—the Polar Regions."

Indeed, the question of space exploration is intriguing. We would all like to know whether there is life on Mars, but robot probes can only perform the experiments they take along with them. Only humans are flexible enough to explore a new territory in detail and determine whether there are enough resources to sustain a long-term presence. Does modern technology really permit the design of lightweight, energy-efficient habitats suitable for other worlds?

Greenhouse Living

The Sun provides the Earth and Moon with about 1400 watts per square meter, which is ample energy to warm a habitat even when the angle of the incident light and losses due to reflection are taken into account. On Mars, the sunshine is a little less than half as strong, which means that the equator of Mars receives about as much solar energy as the higher latitudes of Earth (Iceland, for example). The most efficient way to generate heat from sunlight is, of course, the well-known "greenhouse" effect. Given a transparent or translucent roof, any structure can hold onto the energy of sunlight long enough to transform it into heat. Glass works well for this, but glass is heavy and expensive to transport.

Some good alternatives to glass are now available, however, and more options are on the way. Innovative manufacturing techniques have created many useful composite materials, including translucent, flexible membranes such as Saint-Gobain's Sheerfill®. While these materials are certainly more expensive than glass, very little is required to construct a useful shelter.

In a recent article submitted to arXiv.org [3], Bolonkin and Cathcart have designed an inflatable, translucent dome that can heat its interior to comfortable temperatures using only the weak sunlight of high latitudes. While many details remain to be worked out, the essential concept is sound. To improve the energy efficiency of the structure, they propose adding multiple insulating layers, aluminum-coated shutters, and a fine electrical network to sense damage to the structure. The dome would be supported entirely by the pressure of the air inside, which can be adjusted to compensate for the added buoyancy caused by high winds.

The principal advantages of this design are the low weight and flexibility of the material. If only a few people at a time need shelter, an enclosure the size of a small house would weigh only about 65 kg, or as much as a person. This is light enough even for a space mission, and setting up would be as easy as turning on an air pump. For large colonies, enough membrane to enclose 200 hectares would weigh only 145 tons. The interior would be warm and sheltered, a safe environment for the construction of more traditional buildings and gardens.
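As a rough sanity check of the figures quoted above (simple arithmetic only, not calculations taken from Bolonkin and Cathcart's paper), the 145 tons of membrane for 200 hectares implies a film weighing on the order of 70 grams per square meter, and the available solar power on Mars follows from the "a little less than half" figure. A minimal sketch, assuming the film area of a shallow dome is comparable to the enclosed ground area:

```python
# Back-of-the-envelope check of the quoted membrane and sunlight figures.
# Assumption: for a shallow dome the film area is roughly equal to the enclosed ground area.

solar_constant_earth = 1400.0          # W/m^2, as quoted for Earth and the Moon
mars_fraction = 0.43                   # "a little less than half as strong" on Mars

membrane_mass_kg = 145_000.0           # 145 tons of membrane...
enclosed_area_m2 = 200 * 10_000        # ...enclosing 200 hectares (1 ha = 10,000 m^2)

areal_density_g = 1000.0 * membrane_mass_kg / enclosed_area_m2
print(f"implied film mass: ~{areal_density_g:.0f} g/m^2")              # ~72 g/m^2

print(f"peak sunlight, Earth/Moon: {solar_constant_earth:.0f} W/m^2")
print(f"peak sunlight, Mars:       {solar_constant_earth * mars_fraction:.0f} W/m^2")  # ~600 W/m^2
```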
Bolonkin and Cathcart have attracted attention with their proposal, but a prototype has not yet been constructed.

Notes:
[1] Source: 1996 report on the U.S. Antarctic Program by the National Science and Technology Council; www.nsf.gov/pubs/1996/nstc96rp/chiv.htm
[2] Source: Sam Johnston, "Recent Trends in Biological Prospecting", UN University Institute for Advanced Studies; www.ias.unu.edu/sub_page.aspx?catID=35&ddlID=20
[3] xxx.arxiv.org/abs/physics/0701098

By Ben Mathiesen, © 2007 PhysOrg.com.

Citation: Inflatable Habitats for Polar and Space Colonists (2007, January 29), retrieved 18 August 2019 from https://phys.org/news/2007-01-inflatable-habitats-polar-space-colonists.html


Researchers analyze the future of transistorless magnonic logic circuits

(PhysOrg.com) — As one of the newest research areas today, the field of magnonics is attracting researchers for many reasons, not the least being its possible role in the development of transistor-less logic circuits. Information presented at the first conference on magnonics last summer in Dresden has spurred a cluster of papers that focus on the recent progress in the field. In one of these studies, Alexander Khitun, Mingqiang Bao, and Kang L. Wang from the University of California at Los Angeles have shown that magnonic logic circuits could offer some significant advantages – in spite of some disadvantages – that may allow them to not only compete with but also outperform transistor-based CMOS logic circuits.

Image: This figure compares CMOS logic and magnonic logic in terms of throughput (the number of operations per area per time) as a function of the minimum feature size, which is the gate length for CMOS and the wavelength for a spin wave circuit. According to the projected estimates, spin logic may provide a throughput advantage of more than three orders of magnitude over CMOS due to the fact that the throughput of the spin circuit is inversely proportional to the wavelength. However, the throughput of demonstrated spin logic prototypes is currently far below current CMOS technology. Image credit: Alexander Khitun, et al.

The field of magnonics gets its name from spin waves and their associated quasi-particles called magnons, which have attracted scientific interest since the 1950s. Spin waves are collective spin excitations in magnetically ordered materials; by controlling the surrounding magnetic field, researchers can also control these excitations and use them, for example, to carry and process information. Over the past few years, researchers have been investigating how to exploit spin wave phenomena to make logic circuits, which are the basis of data processing in electronic devices. Whereas CMOS logic circuits use electric current to store and transfer data, magnonic logic circuits use spin waves propagating in magnetic waveguides. By avoiding electric currents, magnonic logic circuits have the potential to enable more efficient data transfer and enhanced logic functionality, including parallel data processing.

On the other hand, spin waves are known to have properties that present disadvantages for data processing, including a group velocity that is more than 100 times slower than the speed of light, and an attenuation (reduction of signal strength) that is more than 1,000,000 times higher than for photons. However, as chip density has increased and the distances between components have become smaller, the slow velocity and high attenuation have become less problematic. Now, fast signal modulation has become more important, which spin waves can provide due to their short wavelength and long coherence length.

As the researchers explain in their analysis, a magnonic logic circuit can encode a bit of information in two ways: through either the amplitude or the phase of the spin wave. In the first working spin wave-based logic device, demonstrated in 2005, Mikhail Kostylev and coauthors used the amplitude-encoding approach. They split the spin wave into two paths, which would later interfere with each other either constructively or destructively. The interference creates two opposite amplitudes that represent the 0 and 1 logic states. In the second approach, a spin wave propagating through an inverter waveguide undergoes a half-wavelength phase change. The original phase '0' and the inverted phase 'π' can then be used to represent the logic states 0 and 1, respectively.

While the amplitude-encoding approach has benefits, including low power consumption due to the low energy of the spin wave signal, the researchers here think that the phase-encoding approach is more promising. This is because the phase-encoding approach enables different frequencies to be used as separate information channels, allowing parallel data processing in the same device. The capability of multi-channel data processing would provide a fundamental advantage over existing switch-based logic circuitry, and could lead to performance rates beyond the limits of today's technology.

"The greatest potential advantage of magnonic logic circuits is the ability to process information in parallel on different frequencies, which is not possible for CMOS-based logic," Khitun told PhysOrg.com.
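The interference-based encoding described above can be illustrated with a toy numerical model. This is only a sketch of the encoding idea, not a simulation of the actual device physics: each input bit applies either no phase shift or a π shift to one branch of a split wave, and the amplitude of the recombined wave carries the result.

```python
import numpy as np

def interference_output(bit_a: int, bit_b: int, amplitude: float = 1.0) -> float:
    """Toy model of amplitude encoding: each input bit applies a 0 or pi phase
    shift to one branch of a split spin wave; the branches are then recombined."""
    phase_a = 0.0 if bit_a == 0 else np.pi
    phase_b = 0.0 if bit_b == 0 else np.pi
    # Complex amplitudes of the two recombining branches
    branch_a = amplitude * np.exp(1j * phase_a)
    branch_b = amplitude * np.exp(1j * phase_b)
    return abs(branch_a + branch_b)

for a in (0, 1):
    for b in (0, 1):
        out = interference_output(a, b)
        # Equal phases interfere constructively (amplitude 2), opposite phases cancel (0),
        # so the output amplitude realizes an XNOR of the two inputs.
        print(f"inputs {a},{b} -> output amplitude {out:.1f}")
```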
Khitun, Bao, and Wang have previously fabricated a prototype magnonic device that operates in the GHz frequency range and at room temperature. However, in order for magnonic logic circuits to take advantage of their potential benefits, researchers will have to find solutions to several challenges. For instance, current prototypes will require increased energy efficiency and will need to be scaled down to the submicrometer range in order to compete with CMOS logic circuits. In comparison, there is still plenty of room to scale down the size of transistors, although power dissipation will likely make further scaling inefficient in the CMOS architecture.

Another challenge for the magnonic phase-encoding approach in particular is the requirement for a bi-stable phase element to provide the output on two phases. In their analysis, the researchers note that one candidate is a device called the magnetic parametron, which was invented in the early days of magnetic computers more than 50 years ago. Interestingly, the parametron-based magnetic computers originally competed with transistor-based computers, which eventually proved to be the better option. Yet the magnetic parametron may now provide magnonic logic circuits the ability to live up to their potential.

Other challenges for magnonic logic circuits include minimizing the inductive crosstalk between input and output ports, demonstrating some components of the circuits that have not yet been realized, and ensuring that the spin wave devices are compatible with conventional electron-based devices to enable efficient data exchange.

Although the development of high-performance magnonic logic circuits will face challenges, Khitun, Bao, and Wang conclude that the advantages are significant enough to justify extensive research. Overall, the researchers predict that, even if magnonic logic circuits don't fully replace CMOS logic circuits, they may provide complementary components by offering low-power hardware for certain general and special-task data processing.

More information: Alexander Khitun, Mingqiang Bao, and Kang L. Wang. "Magnonic logic circuits." J. Phys. D: Appl. Phys. 43 (2010) 264005 (10pp). doi:10.1088/0022-3727/43/26/264005
© 2010 PhysOrg.com

Citation: Researchers analyze the future of transistor-less magnonic logic circuits (2010, June 28), retrieved 18 August 2019 from https://phys.org/news/2010-06-future-transistor-less-magnonic-logic-circuits.html


Extraordinary light enhancement technique proposed for nanophotonic devices

(PhysOrg.com) — In a new study, scientists have shown that simply tailoring the nanoscale geometrical parameters of dielectric structures can result in an increase in the light intensity to unprecedented levels. Theoretically, they calculate that the light intensity could be increased to up to 100,000 times that of the incident intensity over large volumes. This large light enhancement could lead to new developments in all-optical switching and biosensing applications.

The researchers, Rebecca Sainidou from the Spanish National Research Council (CSIC), Jan Renger from the Institute of Photonic Sciences (ICFO), and coauthors from various institutes in Spain, have published their study on the new method for dielectric light enhancement in a recent issue of Nano Letters.

As the scientists explain, one of the biggest problems for nanophotonic devices made of metal is that the metals in these devices absorb some light, limiting the overall light intensity. Here, the researchers proposed using dielectric rather than metallic structures, and described three different arrangements for achieving a large light enhancement: dielectric waveguides, dielectric particle arrays, and a hybrid of these two structures. In each of the three proposed arrangements, the researchers show that, by suppressing absorption losses, light energy can be piled up in resonant cavities to create extremely intense optical fields.

"Metallic structures can produce a similar level of enhancement via localized plasmon excitation, but only over limited volumes extended a few nanometers in diameter," coauthor Javier García de Abajo from CSIC told PhysOrg.com. "In contrast, our work involves a huge enhancement over large volumes, thus making optimum use of the supplied light energy for extended biosensing applications and nonlinear optics. In metallic structures, absorption can be a problem because of potential material damage and because it reduces the available optical energy in the region of enhancement. This type of problem is absent in our dielectric structures.

"One could obtain large light intensity enhancement just by simply accumulating it from many sources (e.g., by placing the ends of many optical fibers near a common point in space, or by collecting light coming from many large-scale mirrors). But this sounds like wasting a lot of optical energy just to have an enhancement effect in a small region of space. However, this is essentially what metallic structures do to concentrate light in so-called optical hot-spots using plasmons. In contrast, our structures do not concentrate the light in tiny spaces: they amplify it over large volumes, and this has important applications. This amplification is done through the use of evanescent and amplifying optical waves, which do not transport energy, but can accumulate it."

Although theoretically there is no upper limit to the intensity enhancement that these structures can achieve, fabrication imperfections limit the enhancement to about 100,000 times that of the incident light intensity. In a proof-of-principle demonstration of the dielectric waveguide arrangement, the researchers showed a light intensity enhancement of a factor of 100. The researchers predict that this moderate enhancement should be easily improved by reducing the interface roughness through more careful fabrication, and they are currently working on experiments to demonstrate a larger light enhancement.

As the researchers explain, part of the "holy grail" of designing nanodevices for optical applications is the ability to control light enhancement, as well as light confinement and subwavelength light guiding. By demonstrating the possibility of achieving an extremely large light intensity in large volumes, the researchers have opened up new possibilities in many nanophotonics applications.
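For intuition about how suppressing absorption lets intensity pile up, consider a generic lossless-resonator estimate. This is not the waveguide or particle-array geometry analyzed by Sainidou and coworkers, only the textbook symmetric two-mirror cavity, where the circulating intensity on resonance exceeds the incident intensity by roughly 1/(1 - R) for mirror reflectivity R:

```python
def cavity_buildup(R: float) -> float:
    """Circulating-to-incident intensity ratio for a lossless, symmetric
    two-mirror cavity driven on resonance: I_circ / I_in ~ 1 / (1 - R)."""
    return 1.0 / (1.0 - R)

for R in (0.99, 0.999, 0.99999):
    print(f"R = {R}: intensity buildup ~ {cavity_buildup(R):,.0f}x")
# Residual losses of 1% already give a factor of 100, as in the proof-of-principle
# experiment; reaching ~100,000x requires losses at the 1e-5 level.
```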
For example, nanophotonics components have already been used to produce artificial magnetism, negative refraction, and cloaking, and for biosensing.

"Certain molecules are produced in our bodies preferentially when we suffer some illnesses (e.g., tumors, infections, etc.)," García de Abajo said. "The detection of these molecules can sometimes be a difficult task, because they are seldom encountered in minute concentrations. A practical way of detecting these molecules, and thus unveiling the potential illness to which they are associated, is by illuminating them and seeing how they scatter or absorb light (e.g., how light of different colors is absorbed by these molecules or how they change the color of the light). Therefore, it is important to amplify the optical signal that these molecules produce, so that we can have access to them even if they are in very low concentrations. Our structures do precisely that: they amplify the light over large volumes, so that if the molecules to be detected are placed inside those volumes, they will more easily produce the noted optical signal (absorption, color change, etc.). This is thus a practical way of detecting diseases such as cancer.

"In a different direction, light amplification is useful to produce a nonlinear response to the external light, and this can be directly applied to process information encoded as optical signals. This is an ambitious goal that is needed to fabricate optical computers. Such computers are still far from reachable, but they are expected to produce a tremendous increase in the speed of computation and communication. Our structures provide an innovative way of using light in devices for information processing."

More information: Rebecca Sainidou, et al. "Extraordinary All-Dielectric Light Enhancement over Large Volumes." Nano Letters, ASAP. DOI: 10.1021/nl102270p

© 2010 PhysOrg.com

Citation: Extraordinary light enhancement technique proposed for nanophotonic devices (2010, November 3), retrieved 18 August 2019 from https://phys.org/news/2010-11-extraordinary-technique-nanophotonic-devices.html


Robotics team finds artificial fingerprints improve tactile abilities

(PhysOrg.com) — Over the past couple of decades, many people in and out of the science community have watched the steady progress being made in robotics. It's an exceptionally interesting field due to the anthropomorphic nature of the results: each new step brings such machines closer to emulating us, even as we look forward to the next step. One interesting thing about robotics is that certain areas seem to be advancing faster than others. Robot arms, for example, are old news; new research is focused more on hand movements. And as advances in hand movements have been made, more research has come to focus on finger movements and, finally, tactile sensations. Now, new work by a trio of researchers from the National University of Singapore, described in a paper published on the preprint server arXiv, shows how affixing artificial fingerprints to robot fingers can increase tactile "sensation," allowing such a robot to discern differences in the curvature of objects.

Image: Schematic of the indentation process. (a) Flat surface being applied to the ridged skin cover. (b) Curved surface being applied to the ridged skin cover. Image credit: DOI:10.3390/s110908626

As with many areas of science, even the seemingly simple stuff turns out to be quite complicated on closer view. The human fingertip, for example, covered with skin unlike that of any other body part, has raised ridges that allow people to feel the difference in texture between wood and metal or silk and linen. It can also detect temperature and, as it turns out, is involved in figuring out the curvature of objects that are touched. Consider, for example, the keys on a cell phone, or a television remote control. It's these kinds of abilities that Saba Salehi, John-John Cabibihan and Shuzhi Sam Ge are trying to emulate in their lab in Singapore. To begin, they've started with the easiest of the bunch, trying to figure out whether artificial fingerprints fitted on a robot hand can tell how roundish an object is.

To find out, they built a touch sensor comprising a base plate, embedded sensors and a raised, ridged surface, all on a 4 mm square. They then set about testing the simple sensor in a variety of ways to see how well it could sense objects, specifically as it was applied to flat, edged and curved surfaces. They also built an identical sensor, except that the raised portion was flat instead of ridged, to serve as a control.

They found that the ridged sensor did indeed provide more feedback (resonance) information than the one with the flat surface, so much so that they were able to tell the difference between the three types of objects with 95.7% accuracy.

Undoubtedly, more research will be done in this area by this group and others, and perhaps very soon robot fingertips will become just as sensitive as our own, if not more so, leading to a whole new generation of gentler robots able to perform tasks with both dexterity and a deft touch.

More information: Artificial Skin Ridges Enhance Local Tactile Shape Discrimination, Saba Salehi, John-John Cabibihan, Shuzhi Sam Ge, arXiv:1109.3688v1 [physics.med-ph], DOI: 10.3390/s110908626

Abstract: One of the fundamental requirements for an artificial hand to successfully grasp and manipulate an object is to be able to distinguish different objects' shapes and, more specifically, the objects' surface curvatures. In this study, we investigate the possibility of enhancing the curvature detection of embedded tactile sensors by proposing a ridged fingertip structure, simulating human fingerprints. In addition, a curvature detection approach based on machine learning methods is proposed to provide the embedded sensors with the ability to discriminate the surface curvature of different objects. For this purpose, a set of experiments were carried out to collect tactile signals from a 2 × 2 tactile sensor array, then the signals were processed and used for learning algorithms. To achieve the best possible performance for our machine learning approach, three different learning algorithms of Naïve Bayes (NB), Artificial Neural Networks (ANN), and Support Vector Machines (SVM) were implemented and compared for various parameters. Finally, the most accurate method was selected to evaluate the proposed skin structure in recognition of three different curvatures. The results showed an accuracy rate of 97.5% in surface curvature discrimination.

via ArXiv Blog
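The learning pipeline described in the abstract can be approximated with a short scikit-learn sketch. The 2 × 2 sensor readings below are synthetic stand-ins invented for illustration, so the accuracies it prints say nothing about the paper's reported results; the point is only the structure of comparing Naïve Bayes, a small neural network, and an SVM on the same tactile features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_reading(curvature_class: int) -> np.ndarray:
    """Fake 2x2 tactile-array reading: flatter surfaces press all four taxels
    roughly equally, sharper curvatures concentrate pressure unevenly."""
    base = [1.0, 0.7, 0.4][curvature_class]          # flat, medium, sharp
    spread = [0.0, 0.3, 0.6][curvature_class]
    taxels = base + spread * np.array([+1, -1, -1, +1]) / 2
    return taxels + rng.normal(0.0, 0.05, size=4)    # add sensor noise

X = np.array([synthetic_reading(c) for c in range(3) for _ in range(200)])
y = np.repeat(np.arange(3), 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Neural net (MLP)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy {accuracy_score(y_test, model.predict(X_test)):.3f}")
```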
© 2011 PhysOrg.com

Citation: Robotics team finds artificial fingerprints improve tactile abilities (2011, September 21), retrieved 18 August 2019 from https://phys.org/news/2011-09-robotics-team-artificial-fingerprints-tactile.html


British ornithologists track cuckoo birds' migration route

(Phys.org) — Nowhere, it seems, are bird watchers more enthusiastic than in Britain, where groups congregate to watch and discuss the most intimate details of their favorite fowl. Of consternation to such groups, however, is the decline of several favorite species, one of which is the cuckoo, which has seen a nearly fifty percent drop in numbers in just the past couple of decades. Making matters even more frustrating has been the lack of data on the birds that might offer clues as to why their numbers are dropping. Now, one group, the British Trust for Ornithology (BTO), has taken matters into its own hands by capturing five wild cuckoos and fitting them with tiny radio backpacks so the birds can be tracked during their annual migration. The hope is that by tracking the birds to see where some die, efforts can be made to help them survive.

Image: Lyster, pre-migration. Credit: BTO

The group not only tagged the birds with backpacks, which cost about £2,000 each, but also named them (Lyster, Chris, Clement, Martin and Kasper) and allowed others to track their journey via Google Maps. Unfortunately, only two of the group managed to survive the journey (Lyster and Chris, though there is still some hope for Kasper) from Norfolk in England to the Congo and back, but the trackers did provide a clear map of the birds' migration routes, which, oddly, were quite different for each bird, even as they all ended up in nearly the same place for their winter stay.

All of the birds made it to the Congo; it was in coming back that they ran into trouble. To do so, they have to stop and fill up twice: once before crossing the Sahara desert, then again before crossing the Mediterranean Sea. And though the test was of just one small group migrating once, BTO members are already hypothesizing that the birds might be finding it more difficult to fill up properly before crossing the big hazards than in years past, which would account for fewer of them surviving the trip north.

One surprise the group found was that the cuckoos all veered slightly west, towards Cameroon, before heading due north on the return trip. In hindsight this shouldn't have been a surprise, as that route crosses the narrowest part of the desert. There are other hazards as well, they found; one bird, Martin, apparently met his demise after wandering into a violent hailstorm.

Despite the losses, the team views the study as a success. Much more is now known about the migration of the cuckoo, and because of that, efforts might begin to help more of the birds survive the round trip each year, thus preventing them from disappearing altogether.

More information: www.bto.org/science/migration/ … yster/finding-lyster

© 2012 Phys.org

Citation: British ornithologists track cuckoo birds' migration route (2012, May 7), retrieved 18 August 2019 from https://phys.org/news/2012-05-british-ornithologists-track-cuckoo-birds.html


Panasonic tech fixes color setbacks in low light photos (w/ video)

(Phys.org) — Panasonic's new color filtering technology is in the news this week after a video from DigInfo TV presented what imaging experts at Panasonic have been up to, and that is using "micro color splitters," which achieve twice the brightness previously possible. These micro color splitters replace a traditional filter array over the image sensor.

The result from the new approach is especially relevant for those working with low light photography: situations where there is less than daytime light outside, or any indoor photography without much ambient light. The researchers found their new approach could almost double the brightness of photos taken in low light environments. Saying no to traditional color filters, the researchers wanted a technique where light is captured without any loss.

The problem has been that image sensors have produced color pictures by using red, green, and blue filters for each pixel, but with that system, 50 percent to 70 percent of the light is lost. The micro color splitters instead control the diffraction of light at a microscopic level. Panasonic's imaging experts said that they achieved approximately double the color sensitivity in comparison with conventional sensors that use color filters.

"Conventional color image sensors use a Bayer array [the arrangement of color filters used in imaging sensors in digital cameras, camcorders, and scanners to create a color image]. The filter pattern is 50 percent green, 25 percent red and 25 percent blue, in which a red, green, or blue light-transmitting filter is placed above each sensor. These filters block 50 to 70 percent of the incoming light before it even reaches the sensor," according to a Panasonic release. Seeing demand for higher-sensitivity cameras on the rise, Panasonic sought a new solution to enable sensors to capture "uniquely vivid" color images.

In the video, Seiji Nishiwaki commented further: "Here, color filters aren't used. So light can be captured without loss, which enables us to achieve approximately double the sensitivity." Nishiwaki said Panasonic's technology can be used on different types of sensors, whether CCD, CMOS, or BSI, and can be in step with current semiconductor fabrication processes. He said the new approach would not require any special materials or processes.

According to DigInfo TV: "The image sensor uses two types of color splitters: red deflectors and blue deflectors. The red and blue deflectors are arranged diagonally, with one of each for every four pixels. RGB values can be obtained by determining the intensity of light reaching each of the four pixels. For example, if white light enters each pixel, pixels where it doesn't pass through a deflector receive unmodified white light. But in pixels with a red deflector, the light is split into red diffracted light and cyan non-diffracted light. And when white light passes through a blue deflector, it's split into blue diffracted light and yellow non-diffracted light. As a result, the pixel arrangement is cyan, white + red, white + blue, and yellow. The RGB values are then calculated using a processing technique designed specifically for mixed color signals."
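The arithmetic behind recovering RGB from those four mixed signals can be sketched as a small linear system. The mixing matrix below is an idealization of the DigInfo TV description (uniform illumination across the 2 × 2 block, with all diffracted light landing on the neighbouring pixel); it is not Panasonic's actual reconstruction algorithm.

```python
import numpy as np

# Each row expresses one pixel's signal as a combination of the block's
# (assumed uniform) R, G, B, in the order described above:
# [cyan, white + red, white + blue, yellow]
A = np.array([
    [0, 1, 1],   # cyan pixel: green + blue (its red was diffracted to a neighbour)
    [2, 1, 1],   # white + red pixel: its own white plus the diffracted red
    [1, 1, 2],   # white + blue pixel: its own white plus the diffracted blue
    [1, 1, 0],   # yellow pixel: red + green (its blue was diffracted to a neighbour)
])

def block_to_rgb(pixels):
    """Recover R, G, B for one 2x2 block by least squares (toy model only)."""
    rgb, *_ = np.linalg.lstsq(A, np.asarray(pixels, dtype=float), rcond=None)
    return rgb

# Example: a scene patch with R=0.2, G=0.5, B=0.3 produces these four signals
true_rgb = np.array([0.2, 0.5, 0.3])
measured = A @ true_rgb
print(block_to_rgb(measured))   # -> approximately [0.2, 0.5, 0.3]
```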
Nishiwaki made special note of something called Babinet-BPM: "We've developed a completely new analysis method, called Babinet-BPM. Compared with the usual FDTD method, the computation speed is 325 times higher, but it only consumes 1/16 of the memory. This is the result of a three-hour calculation by the FDTD method. We achieved the same result in just 36.9 seconds." FDTD stands for finite-difference time-domain and BPM stands for beam propagation method; both are numerical analysis techniques.

Panasonic's work is also described in Nature Photonics, in a study called "Efficient colour splitters for high-pixel-density image sensors." The authors said, "We experimentally demonstrate that this principle of colour splitting based on near-field deflection can generate color images with minimal signal loss."

More information: Nature Photonics paper: www.nature.com/nphoton/journal … photon.2012.345.html

Journal information: Nature Photonics

Via Diginfo.tv

© 2013 Phys.org

Citation: Panasonic tech fixes color setbacks in low light photos (w/ video) (2013, March 29), retrieved 18 August 2019 from https://phys.org/news/2013-03-panasonic-tech-setbacks-photos-video.html


In quantum theory of cognition, memories are created by the act of remembering

(Phys.org) — The way that thoughts and memories arise from the physical material in our brains is one of the most complex questions in modern science. One important question in this area is how individual thoughts and memories change over time. The classical, intuitive view is that every thought, or "cognitive variable," that we've ever had can be assigned a specific, well-defined value at all times of our lives. But now psychologists are challenging that view by applying quantum probability theory to how memories change over time in our brains.

Image: Venn diagram showing the relationship between the assumptions of cognitive realism and cognitive completeness, and their overlap, which defines classical cognitive models. Quantum models satisfy cognitive completeness but not cognitive realism, and a model in the class 'X' would satisfy cognitive realism but not cognitive completeness. Credit: Yearsley and Pothos. ©2014 The Royal Society

"There are two lines of thought when it comes to using quantum theory to describe cognitive processes," James M. Yearsley, a researcher in the Department of Psychology at City University London, told Phys.org. "The first is that some decision-making processes appear quantum because there are physical processes in the brain (at the level of neurons, etc.) that are quantum. This is very controversial and is a position held by only a minority. The second line of thought is that basic physical processes in the brain at the level of neurons are classical, and the (apparent) non-classical features of some human decision-making arise because of the complex way in which thoughts and feelings are related to basic brain processes. This is by far the more common viewpoint, and is the one we personally subscribe to."

Memory construction

In their study, Yearsley and Emmanuel M. Pothos, also at City University London, have proposed that quantum probability theory may be used to assign probabilities to how precisely our thoughts, decisions, feelings, memories, and other cognitive variables can be recalled and defined over time. In this view, recalling a memory at one point in time interferes with how we remember perceiving that same memory in the past or how we will perceive it in the future, much in the way a measurement may change the outcome of something being measured. This act of recall is sometimes called "constructive" because it can change (or construct) the recalled thoughts. In this view, the memory itself is essentially created by the act of remembering.

As Yearsley explains, the idea that measurements might be constructive in cognition can be understood with an example of chocolate cravings.

"It's a little bit like how you can be sitting at your desk happily working away until one colleague announces that they are popping out to the shop and would you like anything, at which point you are overcome with a desire for a Twix!" he said. "That desire wasn't there before your colleague asked; it was created by that process of measurement. In quantum approaches to cognition, cognitive variables are represented in such a way that they don't really have values (only potentialities) until you measure them. That's a bit like saying as it gets towards lunchtime there is an increased potentiality for you to say you'd like a Twix if someone asks you, but if you're hard at work you might still not be thinking consciously about food. Of course, this analogy isn't perfect."
This quantum view of memory is related to the uncertainty principle in quantum mechanics, which places fundamental limits on how much knowledge we can gain about the world. When measuring certain kinds of unknown variables in physics, such as a particle's position and momentum, the more precisely one variable can be determined, the less precisely the other can be determined. The same is true in the proposed quantum view of cognitive processes. In this case, thoughts are linked in our cognitive system over time, in much the same way that position and momentum are linked in physics. The cognitive version can be considered as a kind of entanglement in time. As a result, perfect knowledge of a cognitive variable at one point in time requires there to be some uncertainty about it at other times.

Overturning classical assumptions

The scientists explain that this proposal can be tested by performing experiments that try to violate the so-called temporal versions of the Bell inequalities. In physics, violation of the temporal Bell inequalities signifies the failure of classical physics to describe the physical world. In cognitive science, the violations would signify the failure of classical models of cognition that make two seemingly intuitive assumptions: cognitive realism and cognitive completeness.

As the scientists explain, cognitive realism is the assumption that all of the decisions a person makes can be entirely determined by processes at the neurophysiological level (although identifying all of these processes would be extremely complicated). Cognitive completeness is the assumption that the cognitive state of a person making a decision can be entirely determined by the probabilities of the outcomes of the decision. In other words, observing a person's behavior allows an observer to fully determine that person's underlying cognitive state, without the need to invoke neurophysiological variables.

Neither of these assumptions is controversial; in fact, both are central to many kinds of cognitive models. A quantum model, however, does not rely on these assumptions.

"I think the greatest significance of this work is that it succeeds in taking the widely held belief that cognitive variables such as judgments or beliefs always have well-defined values and gives us a way to put that intuition to experimental test," Yearsley said. "Also, assuming we do find a violation of the temporal Bell inequalities experimentally, we would be ruling out not just a single model of cognition, but actually a very large class of models, so it's potentially a very powerful result."
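For readers unfamiliar with temporal Bell inequalities, the sketch below evaluates the standard physics example, the Leggett-Garg inequality for a precessing two-level system. It is purely illustrative of what a "violation" means in the physical setting and is not a model of the cognitive experiments Yearsley and Pothos propose.

```python
import numpy as np

# Leggett-Garg parameter K = C12 + C23 - C13 for a two-level system (spin-1/2)
# precessing at angular frequency omega and measured dichotomically (+/-1)
# at three equally spaced times separated by tau.
# For this system the two-time correlators are C_ij = cos(omega * (t_j - t_i)),
# and any classical "realistic" description obeys K <= 1.

def leggett_garg_K(omega_tau: float) -> float:
    c12 = np.cos(omega_tau)
    c23 = np.cos(omega_tau)
    c13 = np.cos(2.0 * omega_tau)
    return c12 + c23 - c13

angles = np.linspace(0.0, np.pi, 1000)
K = np.array([leggett_garg_K(a) for a in angles])
best = K.argmax()
print(f"maximum K = {K[best]:.3f} at omega*tau = {angles[best]:.3f} rad")
# -> maximum K = 1.5 near omega*tau = pi/3, exceeding the classical bound of 1
```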
Interpreting a possible violation of a temporal Bell inequality is not straightforward, since one would have to decide which of the two assumptions, realism or completeness, should be abandoned. The researchers argue that for the purposes of creating models of cognition it makes more sense to assume that cognitive realism is not valid, thus rejecting the idea that decisions can be thought of as being fully determined by underlying neurophysiological processes. A key implication would be that an individual may not have a well-defined judgment at all points in time, which may offer insight into aspects of cognition that have so far resisted formal explanation. One such example is the creation of false memories.

The scientists hope that future research will help clarify the role of quantum probability in cognitive modeling, and shed light on the complicated processes that make up all of our memories, thoughts, and identities.

More information: James M. Yearsley and Emmanuel M. Pothos. "Challenging the classical notion of time in cognition: a quantum perspective." Proceedings of the Royal Society B. DOI: 10.1098/rspb.2013.3056

Journal information: Proceedings of the Royal Society B

© 2014 Phys.org. All rights reserved.

Citation: In quantum theory of cognition, memories are created by the act of remembering (2014, March 17), retrieved 18 August 2019 from https://phys.org/news/2014-03-quantum-theory-cognition-memories.html


Bright lights, small crystals: Scientists use nanoparticles to capture images of single molecules

When imaging at the single-molecule level, small irregularities known as heterogeneities become apparent – features that are lost in higher-scale, so-called ensemble imaging. At the same time, it has until recently been challenging to develop luminescent probes with the photostability, brightness and continuous emission necessary for single-molecule microscopy. Now, however, scientists in the Molecular Foundry at Lawrence Berkeley National Lab, Berkeley, CA have developed upconverting nanoparticles (UCNPs) under 10 nm in diameter whose brightness under single-particle imaging exceeds that of existing materials by over an order of magnitude. The researchers state that their findings make a range of applications possible, including cellular and in vivo imaging, as well as reporting on local electromagnetic near-field properties of complex nanostructures.

Image: Luminescence of UCNPs. a, Schematic of energy transfer upconversion with Yb3+ as sensitizer and Er3+ as emitter. b, Minimum peak excitation intensities of NIR light needed for multiphoton single-molecule imaging of various classes of luminescent probes. The peak excitation intensity ranges shown are required to detect signals of 100 c.p.s. Credit: Courtesy Daniel Gargas, Emory Chan, Bruce Cohen, and P. James Schuck, The Molecular Foundry, Lawrence Berkeley National Laboratory

Dr. P. James Schuck discussed the paper that he, Dr. Bruce E. Cohen, Dr. Daniel J. Gargas, Dr. Emory M. Chan, and their co-authors published in Nature Nanotechnology, starting with the main challenges the scientists encountered in:

- developing luminescent probes with the photostability, brightness and continuous emission necessary for single-molecule microscopy
- developing sub-10 nm lanthanide-doped upconverting nanoparticles (UCNPs) an order of magnitude brighter under single-particle imaging conditions than existing compositions, lanthanides being transition metals with properties distinct from other elements

"The most common emitters used for single-molecule imaging – organic dyes and quantum dots – have significant limitations that have proven extremely challenging to overcome," Schuck tells Phys.org. He explains that organic dyes are generally the smallest probes (typically ~1 nm in size), and will randomly turn on and off. This "blinking" is quite problematic for single-molecule imaging, he continues, and a dye will always photobleach – that is, turn off permanently – typically after emitting roughly 1 million photons. "This may sound like a lot of photons at first," Schuck says, "but this means that the dyes stop emitting after only about 1 to 10 seconds under most imaging conditions. UCNPs never blink."

Moreover, Schuck continues, it turns out the same problems exist for fluorescent quantum dots, or Qdots, as well. However, while it is possible to make Qdots that will not blink or photobleach, this usually requires the addition of layers to the Qdot, which makes them too large for many imaging applications. (A quantum dot is a semiconductor nanocrystal small enough to exhibit quantum mechanical properties.) "Our new UCNPs are small, and do not blink or bleach."

Due to these properties, he notes, UCNPs have recently generated significant interest because they have the potential to be ideal luminescent labels and probes for optical imaging – but the major roadblock to realizing their potential had been the inability to design sub-10 nm UCNPs bright enough to be imaged at the single-UCNP level.

Schuck mentions another advantage of upconverting nanoparticles – namely, they operate by absorbing two or more infrared photons and emitting higher-energy visible light. "Since nearly all other materials do not upconvert, when imaging the UCNPs in a sample, there is almost no other autofluorescent background originating from the sample. This results into good imaging contrast and large signal-to-background levels." In addition, while organic dyes and Qdots can also absorb IR light and emit higher-energy light via a nonlinear two+ photon absorption process, the excitation powers needed to generate measurable two-photon fluorescence signals in dyes and small Qdots are many orders of magnitude higher than is needed for generating upconverted luminescence from UCNPs. "These high powers are generally bad for samples and a big concern in bioimaging communities," Schuck emphasizes, "where they can lead to damage and cell death."
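A quick back-of-the-envelope check of the dye photon budget Schuck quotes above; the emission rates used here are typical order-of-magnitude assumptions, not figures from the paper:

```python
# Rough photon-budget estimate for an organic dye under single-molecule imaging.
# The emission rates are assumed typical values, not numbers from the study.
photons_before_bleach = 1e6                 # ~1 million photons, as quoted by Schuck

for rate in (1e5, 1e6):                     # assumed photons emitted per second
    lifetime_s = photons_before_bleach / rate
    print(f"at {rate:.0e} photons/s, the dye lasts ~{lifetime_s:.0f} s")
# -> roughly 1 to 10 seconds of usable signal, matching the article's estimate
```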
Schuck notes that two other key aspects central to the discoveries mentioned in the paper – using advanced single-particle characterization, and theoretical modeling – were a consequence of the multidisciplinary collaborative environment at the Foundry. "This study required us to combine single-molecule photophysics, the ability to synthesize ultrasmall upconverting nanocrystals of almost any composition, and the advanced modeling and simulation of UCNP optical properties," he says. "Accurately simulating and modeling the photophysical behavior of these materials is challenging due to the large number of energy levels in these materials that all interact in complex ways, and Emory Chan has developed a unique model that objectively accounts for all of the over 10,000 manifold-to-manifold transitions in the allowed energy range."

Previously, Schuck says, the conventional wisdom for designing bright UCNPs had been to use a relatively small concentration of emitter ions in the nanoparticles, since too many emitters lead to lower brightness due to self-quenching effects once the UCNP emitter concentration exceeds ~1%. "This turns out to be true if you want to make particles that are bright under ensemble imaging conditions – that is, where a relatively low excitation power is used – since you have many particles signaling collectively," Schuck explains. "However, this breaks down under single-molecule imaging conditions." In their paper, the researchers have demonstrated that under the higher excitation powers used for imaging single particles, the relevant energy levels become more saturated and self-quenching is reduced. "Therefore," Schuck continues, "you want to include in your UCNPs as high a concentration of emitter ions as possible." This results in the nanoparticles being almost non-luminescent at low-excitation-power ensemble conditions due to significant self-quenching, but ultra-bright under single-molecule imaging conditions.

"This brings me to what is probably the most important takeaway from our work, which is the discovery and demonstration of new rules for designing ultrabright, ultrasmall UCNP single-molecule probes," Schuck says. In addition, he stresses that these new rules contrast directly with conventional methods for creating bright UCNPs. "As we showed in our paper, we synthesized and imaged UCNPs as small as a single fluorescent protein! For many bioimaging applications, very small – certainly smaller than 10 nm – luminescent probes are required because you really need the label or probe to perturb the system they are probing as little as possible."

Image: UCNP size-dependent luminescence intensity and heterogeneity. a, Deviation of single UCNP luminescence intensity normalized to particle volume from ideal volumetric scaling (n = 300 total). The curve represents calculated intensity normalized to volume for UCNPs with a nonluminescent surface layer of 1.7 nm. Only intensities from single, unaggregated nanocrystals, as determined by Supplementary Fig. 5, are used. The top inset shows a diagram representing an ideal nanocrystal in which all included emitters are luminescent (green circles). The bottom inset is a diagram representing a nanocrystal with emitters that are nonluminescent (maroon circles) in an outer surface layer. b, Fine spectra of the green emission bands collected from four single 8 nm UCNPs (curves 1–4) and their averaged spectra (curve Σ). Credit: Courtesy Daniel Gargas, Emory Chan, Bruce Cohen, and P. James Schuck, The Molecular Foundry, Lawrence Berkeley National Laboratory
Image: Experimental setup for single UCNP optical characterization. A 980 nm laser is prefocused with a 500 mm lens before entering the back aperture of a 0.95 NA 100x objective (Zeiss), which adjusts the focal plane of the laser closer to that of the visible luminescence. Emitted light is collected back through the same objective, filtered by two 700 nm short-pass filters and two 532 nm long-pass filters (Chroma) to remove residual laser light, and focused onto a single photon counting APD (MPD) or routed to a LN-cooled CCD spectrometer (Princeton Instruments) with a 1200 grooves/mm grating. A time-correlated single photon counter (Picoquant) is used for luminescence lifetime measurements. All experiments were performed in ambient conditions at 10^6/cm^2 unless otherwise noted. Power-dependent data and single-particle line-cuts shown in Fig. 4 were collected with a 1.4 NA 100x oil immersion objective (Nikon). Credit: Courtesy Daniel Gargas, Emory Chan, Bruce Cohen, and P. James Schuck, The Molecular Foundry, Lawrence Berkeley National Laboratory

Another important implication of this finding, Schuck adds, is that it should change how people screen for the best single-molecule luminescent probes in the future. "Until now," he notes, "people would first look to see which probes were bright using ensemble-level conditions, then would investigate only that subset as possible single-molecule probes. Our new probes would, of course, have failed that screening test!" Schuck again emphasizes that "a key reason this discovery happened is that we have experts in all key areas in the same building, and we were able to quickly iterate through the theory-synthesis-characterization cycle."

Regarding future research directions, Schuck notes, the scientists are pursuing a few different avenues. "We'd certainly like to now use these newly-designed UCNPs for bioimaging... so far, we've only investigated the fundamental photophysical properties of these particles when they're isolated on glass. We believe one exciting and important application will be their use in brain imaging – particularly for deep-tissue in vivo optical imaging of neurons and brain function."

In closing, Schuck mentions other areas of research that might benefit from their study. "I think a primary application is in single-particle tracking within cells. For example," he illustrates, "labeling specific proteins with individual UCNPs and tracking them to understand their cellular kinetics." Along different lines, Schuck adds, it turns out that UCNPs are also excellent probes of very local electromagnetic fields. "This is because lanthanides have a rather unique set of photophysical properties, such as relatively prevalent magnetic dipole emission, allowing us to probe optical magnetic fields, and very long lifetimes such that transitions are not strongly allowed, which allows us to more easily probe cavity quantum optical effects such as the Purcell enhancement of emission." In fact, Schuck concludes, an experiment that uses UCNPs to report on the near-field strengths and field distributions surrounding nanoplasmonic devices is just underway.
More information: Engineering bright sub-10-nm upconverting nanocrystals for single-molecule imaging, Nature Nanotechnology 9, 300–305 (2014), doi:10.1038/nnano.2014.29

Journal information: Nature Nanotechnology

© 2014 Phys.org

Citation: Bright lights, small crystals: Scientists use nanoparticles to capture images of single molecules (2014, April 22), retrieved 18 August 2019 from https://phys.org/news/2014-04-bright-small-crystals-scientists-nanoparticles.html


Massive exoplanet discovered using gravitational microlensing method

(Phys.org) — Astronomers have found a new massive alien world using the gravitational microlensing technique. The newly detected exoplanet, designated MOA-2016-BLG-227Lb, is about three times more massive than Jupiter and orbits a distant star approximately 21,000 light years away. The finding was published Apr. 6 in a paper on arXiv.org.

Image: The light curve data for MOA-2016-BLG-227 is plotted with the best-fit model. The top panel shows the whole event, the bottom left and bottom right panels highlight the caustic crossing feature and the second bump due to the cusp approach, respectively. The residuals from the model are shown in the bottom insets of the bottom panels. Credit: Koshimoto et al., 2017.

Gravitational microlensing is an invaluable method of detecting new extrasolar planets circling their parent stars relatively closely. The technique is sensitive to low-mass planets orbiting beyond the so-called "snow line" around relatively faint host stars like M dwarfs or brown dwarfs. Such planets are of special interest for astronomers, as the most active planet formation occurs just beyond this line. Hence, understanding the distribution of exoplanets in this region could offer important clues to how planets form.

The microlensing event MOA-2016-BLG-227 was detected on May 5, 2016 by the Microlensing Observations in Astrophysics (MOA) group using the 1.8 m MOA-II telescope at the University of Canterbury Mt. John Observatory in New Zealand. Afterward, the event was the target of follow-up observations employing three telescopes located on Mauna Kea, Hawaii: the 3.8 m United Kingdom Infra-Red Telescope (UKIRT), the Canada France Hawaii Telescope (CFHT) and the Keck II telescope. The VLT Survey Telescope (VST) at ESO's Paranal Observatory in Chile and the Jay Baum Rich 0.71 m Telescope (C28) at the Wise Observatory in Israel were also used for these observations.

This subsequent observational campaign allowed the research team, led by Naoki Koshimoto of Osaka University in Japan, to detect the new planet and to determine its basic parameters. "The event and planetary signal were discovered by the MOA collaboration, but much of the planetary signal is covered by the Wise, UKIRT, CFHT and VST telescopes, which were observing the event as part of the K2 C9 program (Campaign 9 of the Kepler telescope's prolonged mission)," the paper reads.

The team found that MOA-2016-BLG-227Lb is a super-Jupiter planet with a mass of about 2.8 Jupiter masses. The parent star is most probably an M or K dwarf located in the galactic bulge, with a mass estimated at around 0.29 solar masses. MOA-2016-BLG-227Lb orbits its host at a distance of approximately 1.67 AU. Other parameters, such as the radius of both objects and the orbital period of the planet, are yet to be determined.
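A quick consistency check of those numbers (simple arithmetic, not a calculation from the paper): the planet/host mass ratio q of about 9×10^-3 reported in the abstract, applied to a 0.29 solar-mass host, gives a planet mass close to the quoted value.

```python
# Consistency check of the quoted masses (illustrative arithmetic only)
M_SUN_IN_M_JUP = 1047.6      # one solar mass expressed in Jupiter masses

q = 9e-3                     # planet/host mass ratio from the abstract
host_mass_msun = 0.29        # host star mass quoted in the article

planet_mass_mjup = q * host_mass_msun * M_SUN_IN_M_JUP
print(f"planet mass ~ {planet_mass_mjup:.1f} Jupiter masses")
# -> ~2.7 Jupiter masses, consistent with the quoted "about 2.8 Jupiter masses"
```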
They noted that this event should be revisited with the Hubble Space Telescope (HST) and Keck adaptive optics (AO) system. Promising results could also come from future space and ground based telescopes like the James Webb Space Telescope (JWST), the Giant Magellan Telescope (GMT), the Thirty Meter Telescope and the Extremely Large Telescope (ELT). (Phys.org)—Astronomers have found a new massive alien world using the gravitational microlensing technique. The newly detected exoplanet, designated MOA-2016-BLG-227Lb, is about three times more massive than Jupiter and orbits a distant star approximately 21,000 light years away. The finding was published Apr. 6 in a paper on arXiv.org. More information: MOA-2016-BLG-227Lb: A Massive Planet Characterized by Combining Lightcurve Analysis and Keck AO Imaging, arXiv:1704.01724 [astro-ph.EP] arxiv.org/abs/1704.01724AbstractWe report the discovery of a microlensing planet —- MOA-2016-BLG-227Lb —- with a massive planet/host mass ratio of q≃9×10−3. This event was fortunately observed by several telescopes as the event location was very close to the area of the sky surveyed by Campaign 9 of the K2 Mission. Consequently, the planetary deviation is well covered and allows a full characterization of the lensing system. High angular resolution images by the Keck telescope show excess flux other than the source flux at the target position, and this excess flux could originate from the lens star. We combined the excess flux and the observed angular Einstein radius in a Bayesian analysis which considers various possible origins of the measured excess flux as priors, in addition to a standard Galactic model. Our analysis indicates that it is unlikely that a large fraction of the excess flux comes from the lens. We compare the results of the Bayesian analysis using different priors for the probability of hosting planets with respect to host mass and find the planet is likely a gas-giant around an M/K dwarf likely located in the Galactic bulge. This is the first application of a Bayesian analysis considering several different contamination scenarios for a newly discovered event. Our approach for considering different contamination scenarios is crucial for all microlensing events which have evidence for excess flux irrespective of the quality of observation conditions, such as seeing, for example.last_img read more
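
For readers who want to check how the quoted numbers fit together, here is a minimal sketch in Python (an illustration only, not code from the Koshimoto et al. analysis). It verifies the planet mass implied by the mass ratio and host mass reported above, and evaluates the standard point-lens (Paczynski) magnification curve that underlies microlensing light-curve fits; the impact parameter u0 and Einstein timescale tE below are hypothetical placeholder values, not the fitted parameters of this event.

    import math

    # Planet mass implied by the reported planet/host mass ratio.
    q = 9e-3                # planet/host mass ratio quoted in the abstract
    m_host = 0.29           # host star mass in solar masses, as reported
    MJUP_PER_MSUN = 1047.6  # Jupiter masses per solar mass (standard constant)

    m_planet_mjup = q * m_host * MJUP_PER_MSUN
    print(f"Implied planet mass: {m_planet_mjup:.2f} Jupiter masses")  # ~2.7, consistent with the reported ~2.8

    # Point-source, point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    def magnification(u):
        return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

    # Illustrative single-lens light curve; u0 and tE are placeholders, NOT fitted values.
    u0, tE, t0 = 0.1, 30.0, 0.0   # impact parameter, Einstein timescale (days), time of peak
    for t in range(-30, 31, 10):
        u = math.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
        print(f"t = {t:+3d} d   A = {magnification(u):5.2f}")

In a real event, the planetary companion reveals itself as a short-lived deviation (the caustic-crossing feature described in the figure caption) superimposed on this otherwise smooth single-lens curve.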


Bengal tops employment generation in rural areas

Kolkata: Bengal has emerged as the number one state in the country in terms of employment generation in rural areas, Chief Minister Mamata Banerjee said on Tuesday. The state topped the list by generating 30.98 crore person-days till March 31 under the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA).

The Chief Minister tweeted: "I am very happy to share with all of you that Bengal has emerged No.1 in the country in rural employment generation. Under 100 days work scheme, as on 31 March 2018, Bengal has generated 30.98 crore person-days, which is the highest in the country."

Explaining the expenditure carried out for the project in the last fiscal, she maintained in the tweet: "Moreover, West Bengal reported the expenditure of Rs 8007.56 crore under this scheme in 2017-18, which is again the highest in the country. In terms of average person-days per household, West Bengal with 59 days in 2017-18, is the best performer among the major states."

As per the data of the state Panchayats and Rural Development department, the Bengal government has not only scored the first rank in the country in terms of job creation but has also crossed the target set by the Centre for the 2017-18 financial year. The Centre had set a target of 23 crore person-days for Bengal in the 2017-18 fiscal, and the state government had already created 24 crore person-days by the end of December 2017. In the last three months of the 2017-18 fiscal, the state government created another 6.98 crore person-days, taking the total to 30.98 crore by March 31.

It may also be mentioned that the state government had created 21 crore person-days in the 2016-17 financial year, while in the 2017-18 fiscal the figure reached 30.98 crore, an increase of around 47 percent. This comes at a time when the state Panchayats and Rural Development department has set a target of creating 25 crore person-days in 2018-19. There was also a meeting in this connection between senior officials of the state Panchayats and Rural Development department, the Gorkhaland Territorial Administration (GTA), the Siliguri Mahakuma Parishad and authorities from all districts. The new target would also ensure the livelihood of around 10 lakh families, which essentially means that every Gram Panchayat will have to provide jobs to around 300 families.
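
A quick back-of-the-envelope check of the person-day figures quoted above (a sketch using only the numbers in this report, not official data):

    # Person-day figures quoted in the report, in crore (1 crore = 10 million).
    by_december_2017 = 24.0
    last_quarter_2017_18 = 6.98
    total_2017_18 = by_december_2017 + last_quarter_2017_18
    print(round(total_2017_18, 2))  # 30.98 crore, matching the total reported for March 31

    # Year-on-year growth implied by the quoted annual totals.
    total_2016_17 = 21.0
    growth_pct = (total_2017_18 - total_2016_17) / total_2016_17 * 100
    print(round(growth_pct, 1))  # about 47.5 percent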


Coal mine auctions: Centre gets 176 bids for 23 blocks

While the exact value of the bids could not be ascertained, industry estimates suggest these blocks may fetch tens of thousands of crores of rupees in revenue. Government officials said the response has been as per expectations and most of the major players from the private sector are participating in the auction, which was necessitated by the cancellation of earlier coal block awards by the Supreme Court.

"176 technical bids have been received for 23 coal blocks under Schedule II which were put on auction for private sector. The response has been as per our expectation and most of the major players have participated in it," Coal Secretary Anil Swarup told reporters here after the technical bids were opened. Swarup, however, did not reveal the names of the players who participated.

Sources said the bidders include JSPL and GVK Power among others. A spokesperson for Monnet Ispat confirmed that the company has submitted its technical bid. "Auction process will be web cast to maintain transparency," Swarup said after the technical bids were opened today in the presence of bidders.

The Supreme Court had in September 2014 cancelled the allotment of 204 mines made since 1993, terming the allotments "arbitrary and illegal". Coal Minister Piyush Goyal had earlier said that, as per conservative estimates, the eastern states which house most of these mines stand to gain about Rs 7 lakh crore from auctions and royalties over the next 30 years.

A maximum of 16 bids were received for the Gare Palma IV/7 block, which is in Chhattisgarh and was previously allotted to Sarda Energy & Mineral Ltd. Twelve bids each were received for the Tokisud North block in Jharkhand, the Bicharpur mine in Madhya Pradesh and Gare Palma IV/4 in Chhattisgarh. Tokisud North was previously allocated to GVK Power (Govindwal Sahib), while Bicharpur was earlier given to Madhya Pradesh State Mining Corporation. Gare Palma IV/4 was previously allotted to Jayaswal Neco.

The tender process for the 23 coal mines under Schedule II was started on December 27 last year and "bidders submitted their technical bids on MSTC portal created for the purpose," the government said. "The electronic bids were decrypted and opened electronically in the presence of bidders. Entire process was displayed on the screen for the bidders. Subsequently, sealed envelopes containing bank guarantee, power of attorney and affidavit were also opened in the presence of bidders," an official statement said. These bids will be evaluated by a multi-disciplinary technical evaluation committee to shortlist bidders for participation in the electronic auction to be conducted on the MSTC portal from February 14, it added.

The blocks put on offer include Ardhagram, Belgaon, Bicharpur, Chotia, Gare-Palma-IV/1, Gare-Palma-IV/4, Gare-Palma-IV/5, Gare-Palma-IV/7, Gotitoria (East) & (West), Kathautia, Mandla North, Marki Mangli-I, Marki Mangli-III, Parbatpur Central, Sial Ghoghri, Amelia (North), Gare-Palma-IV/2 & IV/3, Sarisatolli, Talabira-I, Tokisud North and Trans Damodar.


Jadavpur University to collaborate with central research institutes

Kolkata: Jadavpur University has been identified by the Union government as the state's only university, and one of 10 universities in the country, chosen to develop synergy between the varsity and centrally funded research institutes.

"This will enable us to have access to the infrastructure facilities of central research institutes located in and around Kolkata. It will enable us to create opportunities of joint collaborative research with these central institutes and at the same time, we will be able to carry out joint research projects and joint supervision of PhD dissertations by the faculty members of JU and these research institutes," JU Vice-Chancellor Suranjan Das said.

According to a senior JU official, the university's teachers could not earlier avail themselves of the excellent infrastructure facilities in these research institutes. "Now, with this distinction, the standard of research will surely get a big boost," he added. According to a senior official of the varsity, JU is probably the only varsity in eastern India to avail these facilities.

The central institutes with which JU can develop synergy are the Saha Institute of Nuclear Physics, Central Glass and Ceramic Research Institute, National Institute of Biomedical Genomics, Indian Institute of Chemical Biology, SN Bose Institute, Bose Institute, Indian Association for the Cultivation of Science and Variable Energy Cyclotron Centre.

The joint faculty council of Science and Technology as well as the executive council of JU have already accepted the Centre's proposal in principle and have resolved that the Boards of Studies will discuss the modalities for implementing the scheme. The Union ministry has also given the nod to JU to take on board other experts as desired from universities and national laboratories in proximity.


Elderly woman dies after falling from multistoreyed building in Ballygunge

Kolkata: An elderly woman died after falling from the roof of a multi-storeyed apartment building at Ballygunge on Thursday morning. The deceased, identified as Jamini Desai (64), was a resident of Earle Street who had been living in the apartment for the last five years.

The police are investigating to ascertain whether the fall was accidental or suicidal, with local residents alleging that her family members were late in taking the lady to the hospital even after she fell from the roof.

"We were having a cup of tea in the tea stall adjacent to the apartment. Suddenly, we heard the noise of something falling on the ground. We found that the elderly woman was lying in a pool of blood. When we raised an alarm and called her family members, they came down and took her inside the house instead of rushing to the hospital. They did so after ten minutes," said local resident Budheshwar Das. Das added that the woman was alive after falling from the roof and rushing her to the hospital might have saved her. The sexagenarian was, however, declared brought dead at a nearby hospital.

According to police sources, the victim had gone to the roof of the apartment at around 7 am to pluck flowers. She loved flowers, and there were a number of flower pots with a variety of flowers blossoming in the rooftop garden, which she herself used to water. The woman was accompanied by the domestic help of the Desai family, Prem Mondal. However, when she told him to fetch the ladder, he went downstairs. It was at that time that the woman fell.

Police have come to know that the victim had been suffering from a lung ailment for quite some time. Her family members have informed police that she was also suffering from depression due to her ill health. "We are exploring all angles," an officer of Ballygunge police station said.


KMC chalks out plan to develop Hatgachia model bustee as an ideal one

Kolkata: The Kolkata Municipal Corporation (KMC) has drawn up a comprehensive plan to develop its model bustee at Hatgachia in the Dhapa area as an 'ideal' one. The civic body will give a fresh coat of paint to the houses in the slum situated adjacent to the road and will then adorn the walls with paintings of freedom fighters and other great men who have made the state proud. The paintings will also carry short messages from these famous personalities.

"Our aim is to curb the tendency of littering the walls of the houses in the slum. When there will be such paintings, people will hesitate to spit or make the walls dirty. This will prevent visual pollution which is a common sight, particularly in the slums," a senior official of the Bustee Development department of KMC said.

"The slum dwellers, particularly the young ones, lack moral values. And for developing a model slum, it is important to instill such values within them. We are hopeful that this act will be a step in that direction," said Swapan Samaddar, Member, Mayor-in-Council (Bustee Development).

It may be mentioned that the Hatgachia bustee has already been developed as a model slum and finishing touches are now being given. The slum has a children's park, community hall, primary school, health centre and playground. The department has also taken up initiatives for social awareness to uplift the lifestyle of slum dwellers.

"We are holding camps in the bustees where officials from our department will interact with the slum dwellers. The programme will be taken up in all the 16 boroughs of KMC, with the aim to educate them about cleanliness, hygiene and similar other practices," the official added. Celebrities from the fields of sports, entertainment and literature are being roped in for conducting such programmes successfully.

It may be mentioned that KMC has planned to develop nine slums in the city into 'model slums', to ensure that people in these areas get the basic civic amenities. "The basic aim is improvement in the socio-economic condition of these slums. The concept of model bustee is to ensure basic civic amenities like drinking water, drainage system, toilets and lighting facilities at every house. However, we will create additional infrastructure wherever there is availability of space," Samaddar said.


Bangladesh High Commission pays homage to martyrs

The Bangladesh High Commission celebrated the 45th anniversary of Victory Day recently in the national Capital, paying homage to the martyrs of the 1971 War of Liberation and to Father of the Nation Bangabandhu Sheikh Mujibur Rahman, under whose leadership Bangladesh gained independence after a nine-month war against Pakistan.

The day's events were marked by the hoisting of the national flag at the chancery compound, the reading out of Victory Day messages from President Md Abdul Hamid, Prime Minister Sheikh Hasina, Foreign Minister Abul Hassan Mahmood Ali MP and State Minister for Foreign Affairs Md Shahriar Alam MP, and the offering of special prayers seeking divine blessings for the martyrs of the Liberation War and for Bangabandhu and the family members killed along with him on the tragic night of August 15, 1975.

Acting High Commissioner Salahuddin Noman Chowdhury, accompanied by officers at the mission, hoisted the national flag as the national anthem was played. A cultural function in the evening at the Maitree Hall of the mission, presented by students and teachers of Dhaka's Govt Music College, enthralled the audience, which included Housing and Public Works Minister Mosharraf Hossain, Disaster Management and Relief Minister Mofazzal Hossain Chowdhury Maya, war veterans, foreign diplomats, academics, journalists and family members of the mission's officers and officials.

Mosharraf Hossain and Maya, both valiant freedom fighters, described the heroic Mukti Bahini's fight against the Pakistani military, which was equipped with the most modern and sophisticated war machines. Welcoming the guests, Salahuddin Noman fondly remembered the supreme sacrifices the freedom fighters had made at the call of Bangabandhu Sheikh Mujib against the brutal Pakistani soldiers who carried out one of history's worst genocides, killing three million unarmed civilians. He said Bangladesh remains grateful to India and its citizens for extending military, economic, diplomatic and moral support to Bangladesh's War of Liberation.

Officers of the mission also attended a seminar on the Bangladesh Liberation War organised by the India Foundation in association with the War Veterans Association, Babu Jagjivan Ram Trust and Nehru Memorial Trust at the Nehru Memorial Museum in the morning. Mofazzal Hossain Chowdhury Maya, Bir Bikram, spoke at the seminar as chief guest, while Indian Minister of State for External Affairs V K Singh presided as the special guest.


A blend of strings and steps

The sixth edition of the Strings N Steps festival, organised by noted Kathak dancer Sangeeta Majumder and Hawaiian guitar player Neel Ranjan Mukherjee, got off to a flying start in the Capital. One of the highlights of the festival was the jugalbandi between Sangeeta and Neel Ranjan; a first between a stringed instrument and dance, it paved the way for a wonderfully executed festival.

The festival this year was a two-day event that featured several musicians and some leading dancers. The genesis of the festival's name has a story of its own: Sangeeta Majumder named it 'Strings N Steps' after her 'Kathak-Hawaiian Guitar amalgamation concept.' In the 2017 edition, the country in focus was Canada, and the festival featured performances by eminent artists from the faraway land. Besides the jugalbandi, day one of the festival featured Pt Dev Bansraj on vocals with Dave Bansraj on tabla, and a sitar recital by Pt Prateek Chaudhuri with Ud Akram Khan on tabla.

The second day of the festival saw delightful performances, including that of the doyenne of Bharatanatyam, Padmabhushan Saroja Vaidyanathan, who gave an impressive lecture demonstration on the concept of Ashtanayika. The Ashta-Nayika is a collective name for eight types of nayikas, or heroines, as classified by Bharata in his Sanskrit treatise on the performing arts, the Natya Shastra. The other dance performance of the evening was an Odissi recital by young dancer Gaurie Dwivedi, who performed a Devi Stuti piece. The item, choreographed by Guru Bichitrananda Swain, gave a vivid description of the Goddess who, as the Mother, creates and nurtures, and who also slays the demon Mahishasura. A lecture demonstration session on 'Rasanubhuti' by Prof Anupam Mahajan successfully explained the fine nuances of Hindustani music to the audience.


Eating disorders: figure skating's dirty little secret

Four years ago, Russia's Yulia Lipnitskaya had the world at her 15-year-old feet, before the ground crumbled beneath them. The figure skater had emerged as one of the golden athletes of the Sochi 2014 Winter Olympics, enchanting her audience on the way to the team title – and a very public bear hug from Russian President Vladimir Putin. But during the Pyeongchang Winter Games, Lipnitskaya is back home in Russia, her skates gathering dust, having retired in September due to health issues related to anorexia.

Eating disorders have been described as "skating's dirty little secret" – partly because sufferers usually strive to keep their problem well hidden. Canada's Gabrielle Daleman said she received messages from many other skaters when she opened up about her own problems. "I don't know if it's the sport or if it's just the way people look at themselves," the 20-year-old told AFP. "I just wasn't happy with the way I looked, from being bullied (at school) and everything, I was getting criticised. I tried to look a certain way, what people wanted me to look, rather than just feel good about myself. So that's something I've learned over the past year."

Daleman, who finished 15th in the women's singles in Pyeongchang, said her eating disorder started at school, where she was bullied over a learning disability. "I got a lot of messages from other skaters. Ashley Cain messaged me, Gracie (Gold) messaged me, Evgenia Medvedeva messaged me. A lot of skaters came up saying thank you for sharing the story, because not a lot of people are open with it."

One of those she mentioned, Gracie Gold, a two-time American champion, pulled out of this Olympic season, citing an eating disorder, depression and anxiety. In 2005, an eating disorder forced American skater Jenny Kirk to end her promising skating career in the run-up to US team selection for the 2006 Turin Games.

– 'I'm not ashamed' –

In Lipnitskaya's case, anorexia was ruining her life. She sought help at a clinic in Israel in January, before taking the momentous decision to call time on her skating career at the age of just 19. She has spoken candidly about her struggles, saying that as an inherently shy girl she found it hard to cope with her sudden fame. "Ever since childhood I've been a very strong introvert," she said in an interview with the Russian skating federation. "Speaking with an unfamiliar person meant I had to make a real effort."

"Anorexia is a 21st-century illness and it's fairly common. Unfortunately, not everyone can cope with it. It was really hard to take the decision to quit… but I talked to my mother and we decided to start a new life."

Skating is tough on a developing body: hours and hours of draining, repetitive training, jumping higher, spinning faster. It is also an aesthetic sport where competitors typically wear skimpy or figure-hugging outfits – and where the spotlight can be cruel, shining on some but not others.

Skating's ruling authority, the ISU, when contacted about eating disorders, referred AFP to the medical guidelines on its website, which say skaters and coaches are educated about health, nutrition and injury. "Any reported case affecting the health of an athlete is taken very seriously by the ISU Medical Commission," the ISU guidelines say.

For Daleman, meanwhile, there is an enormous sense of pride in speaking out about her story and making it to her second Olympics.


For Sale, Baby Shoes, Never Worn – Hemingway and the Saddest Six-Word Story

"For sale, Baby shoes, Never Worn." The legends surrounding Ernest Hemingway can be as interesting as the plots of his novels. The author was famous for his personal life, but some of "Papa's" adventures verge into apocrypha. The above "story" is so short that it could fit into a title, yet it resonates with depth and tragedy. This particular quote, supposedly originating in the 1920s, served as confirmation of Hemingway's extraordinary talent and wit. It has also influenced numerous attempts to create a story within a six-word frame, so-called flash or sudden fiction, giving only a glimpse of a story but in that glimpse delivering so much more.

The story was believed to have been written on a napkin in Luchow's restaurant in Manhattan, while another version claims that it was composed in the Algonquin Hotel, where a circle of New York intellectuals (known as the Algonquin Round Table) enjoyed discussions–and drinking–during lunchtime. This is only the first of the discrepancies that surround the legend, for no one can say with certainty where it happened, or when, and witnesses are elusive.

Ernest Hemingway

Allegedly, the story was the result of a $10 bet among Hemingway and several writers at a lunch spiced with wordplay. Hemingway asked each of his colleagues to place a $10 wager, and in return, he would match it. His task was to create this shortest of stories. The only problem is, Hemingway probably never wrote it. Or if he did, the story wasn't entirely his invention. Similar "ads" have been recorded as early as 1906. According to the Quote Investigator, an earlier version of this minimal sentence was "For Sale, Baby Carriage, Never Used," published in a newspaper section called Terse Tales of the Town. Whether this was a bad joke or someone's sad memory, we will never know.

There are also two other versions written years before the supposed Hemingway bet was placed. One of them was an essay by William R. Kane, about a certain "wife who has lost her baby." It was published in 1917, under the title "Little Shoes, Never Worn." The "carriage" version was repeated in 1921, appearing in a column written by Roy K. Moulton, which held an ad attributed to an anonymous real-life character simply called Jerry: There was an ad in the Brooklyn "Home Talk" which read, "Baby carriage for sale, never used." Would that make a wonderful plot for the movies?

A six-word "novel" regarding a pair of baby shoes is considered an extreme example of flash fiction. Author: JD Hancock CC BY 2.0

Let's just assume that Ernest Hemingway was aware of these earlier versions and decided to cheat his way into winning a bet. After all, he was very well acquainted with the world of reporters, journalists, and feuilleton writers. He could have known about the six-word punchline long before and could have used his wit to collect the wager and establish himself as the winner among potential rivals. But, truth be told, there is no evidence that such a bet actually ever took place, nor that Hemingway ever used this intelligent quote to soften the hearts of sardonic writers.

Hemingway in uniform in Milan, 1918. He drove ambulances for two months until wounded.

The story can be traced to a book published in 1991, titled Get Published! Get Produced! A Literary Agent's Tips on How to Sell Your Writing. The book was written by a literary agent, Peter Miller, who mentions the anecdote about the bet placed in Luchow's, as he heard it from an older newspaper account:

Apparently, Ernest Hemingway was lunching at Luchow's with a number of writers and claimed that he could write a short story that was only six words long. Of course, the other writers balked. Hemingway told each of them to put ten dollars in the middle of the table; if he was wrong, he said, he'd match it. If he was right, he would keep the entire pot. He quickly wrote six words down on a napkin and passed it around; Papa won the bet. The words were "FOR SALE, BABY SHOES, NEVER WORN." A beginning, a middle and an end!

There have been several more accounts over the years that attributed the quote to Hemingway, including a 1998 Reader's Digest essay by Arthur C. Clarke. Miller mentioned it again in a 2006 book, cementing it as Hemingway's brainchild. It was not until 2012 that a true academic investigation took place in order to solve the mystery of the never-worn shoes. The Journal of Popular Culture published an article by Frederick A. Wright that examined the origins of the story and debunked the attribution as false.

Many were probably disappointed to hear that the Nobel Prize-winning author wasn't the man behind the heart-breaking story that he allegedly claimed was his best work. Nevertheless, the truth reveals something else―this false authorship only helped the story reach and inspire numerous people. These six words found their way into readers' lives, and in the end, it doesn't matter who wrote them. What matters is the impact they have left on the way we perceive literature, as a compression of feelings that communicates directly with the human soul.


The Year Without a Summer

Today, seemingly more than ever before, climate change is a hot topic in almost any political and social debate. Is our modern lifestyle artificially interfering with the climate, or is change a natural occurrence independent of human activity? The debate rages on. However, climate abnormalities are not something humanity is encountering for the first time. A historical episode that shows another dark environmental period of the Earth is the year 1816, known in history as the "Year Without a Summer", "Poverty Year" or "Eighteen Hundred and Froze to Death."

According to USA Today, this extremely harsh year caused average global temperatures to decrease by 0.4–0.7 °C (roughly 0.7–1.3 °F). It brought snow in the middle of June, followed by hard, winter-like freezes in July and August. This extraordinary weather destroyed crops, and the food supply became so scarce that countless people in North America and Europe suffered a great famine. In fact, the Year Without a Summer ranks as the sixth-deadliest disaster in Great Britain and Ireland by death toll (about 65,000).

1816 summer temperature anomaly compared to average temperatures from 1971–2000. Photo by Giorgiogp2 CC BY-SA 3.0

One of the claimed scientific reasons for this climate anomaly was the biggest volcanic eruption in recorded history, which took place at Mount Tambora in Indonesia in 1815. The aftermath of this event sent large quantities of dust and ash into the atmosphere, causing a serious decrease in temperatures. The resulting famine spread deadly diseases far and wide, and people were forced to move away from their homes. Experts and scientists claim that this black scenario could happen again, because volcanoes still erupt and no one can be certain when the next big eruption will occur.

The 1815 Mount Tambora eruption. The red areas are maps of the thickness of volcanic ashfall. CC BY-SA 3.0

Considering this, any big eruption may prove far deadlier than any man-made ecological catastrophe. The tragic events of the year 1816 were also sealed in the pages of the book "The Year Without Summer" by William B. Klingaman and co-author Nicholas P. Klingaman. The latter stated at the time that humanity is still unable to predict volcanic eruptions and that their destructive potential can only increase.

The yellow skies typical of summer 1815 had a profound impact on the paintings of J.M.W. Turner.

USA Today reports that, according to Klingaman's book, the eruption of Tambora is "by far the deadliest volcanic eruption in human history, with a death toll of at least 71,000 people, 12,000 of whom were killed directly by the eruption. And this doesn't take into account the indirect deaths caused by the resulting famine." The volcano spewed out enough ash and pumice to cover a square area of 100 miles on each side to a depth of almost 12 feet. NASA also confirms that an eruption can cool a particular area and spread sulfur dioxide into the stratosphere, where it forms sulfate aerosols through its reaction with water vapor. The aerosols are long-lived and cool the surface of the Earth by reflecting sunlight.

Turner's classic sunset paintings were inspired by dust from volcanic eruptions, including at Mount Tambora. This is "Chichester Canal" (1828).

The heavy June snowstorms that year not only killed most of the crops but also froze many birds and other animals to death. Two months later, the freeze in August hit even harder, forcing people to survive in dreadful ways, eating pigeons, raccoons and other unsavory fare. The Year Without a Summer left many of Europe's communities impoverished and, on top of that, fighting a typhus epidemic.

Dark storm clouds on a deserted dirt road. Many people had to flee their homes due to the ravages caused by the summerless year.

After reading this unpleasant chapter in Earth's natural history, one naturally wonders when humanity can expect a return of such a cruel climate episode. According to Klingaman, though eruptions like Tambora happen about once every 1,000 years, smaller eruptions are no less of a problem. For example, the 1991 Pinatubo eruption cooled the Earth's surface by nearly 1 °C (about 1.8 °F).

1991 Mount Pinatubo eruption

Taking into account that today's global temperatures are steadily increasing, it is understandable that a huge eruption could result in a cascade of disasters. USA Today adds that if it were ever to happen, the cooling would be temporary, and warming would return within several years.

Interestingly, the Year Without a Summer had one positive effect. It inspired the British painter J.M.W. Turner, who painted breathtaking landscapes of the sunsets that followed the Tambora eruption. One painting, "The Lake, Petworth: Sunset, Fighting Bucks", was, according to the Daily Mail, painted many years after Tambora, presenting volcanic ash and gas in the sky under the warm colors of the sunset.
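
Since the account above mixes Celsius and Fahrenheit figures, it is worth noting that a temperature change converts differently from an absolute temperature reading. The short sketch below is plain unit arithmetic, not tied to any particular climate dataset, and shows the conversion used for the figures above.

    def delta_c_to_f(delta_c):
        # A temperature *difference* converts with the 9/5 factor only.
        return delta_c * 9.0 / 5.0

    def absolute_c_to_f(temp_c):
        # An *absolute* temperature reading also gets the +32 offset.
        return temp_c * 9.0 / 5.0 + 32.0

    # The 1816 cooling of 0.4-0.7 degrees Celsius is a difference:
    print(delta_c_to_f(0.4), delta_c_to_f(0.7))   # about 0.7 and 1.3 degrees Fahrenheit
    # Pinatubo's roughly 1 degree Celsius of cooling likewise:
    print(delta_c_to_f(1.0))                      # 1.8 degrees Fahrenheit
    # Applying the absolute formula to a difference would wrongly add the 32-degree offset.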


The Oldest Weapons Ever Used by Humans in North America Found in Texas

Spear points discovered at the Buttermilk Creek archaeological site in Texas have made archaeologists question their assumptions about when humans migrated to North America. The weapons are dated to between 13,500 and 15,500 years ago and are believed to be the oldest ever found in what is now the United States. Researchers at Texas A&M found the spear points, which are about three to four inches long, about 40 miles northwest of Austin. The weapons could have been used to hunt mastodons and mammoths.

Photo courtesy Texas A&M University

What is causing a stir is that the weapons were found in layers beneath Clovis spear points, and the Clovis were long believed to be the first humans to come to North America. Now scientists say that may no longer necessarily be the case. "There is no doubt these weapons were used for hunting game in the area at that time," said Michael Waters, distinguished professor of anthropology and director of the Center for the Study of the First Americans at Texas A&M. The discovery is significant because, although almost all pre-Clovis sites have yielded stone tools, spear points had not been found at them.

Clovis points from the Rummells-Maske Cache Site, Iowa. Photo by Billwhittaker CC BY-SA 3.0

"These points were found under a layer with Clovis and Folsom projectile points. Clovis is dated to 13,000 to 12,700 years ago and Folsom after that. The dream has always been to find diagnostic artifacts – such as projectile points – that can be recognized as older than Clovis and this is what we have at the Friedkin site." The spear points were discovered under several feet of sediment.

A Clovis projectile point created using bifacial percussion flaking (that is, each face is flaked on both edges alternately with a percussor). Photo by Locutus Borg

The Clovis people invented the "Clovis point," a spear-shaped weapon made of stone that is found in Texas and other parts of the United States and northern Mexico. These weapons were made to hunt animals, including mammoths and mastodons, from 13,000 to 12,700 years ago, according to Science Daily. How and when the first people arrived in North America is debated. It is thought people migrated across the Bering Land Bridge, which once linked Siberia and Alaska, around 20,000 years ago. Scientists believe the first Americans made it south of the continental ice sheets about 16,000 years ago and slowly spread out.

Restoration of an American mastodon. Photo by Sergiodlarosa CC BY-SA 3.0

Waters and his team are building an environmental record so they can reconstruct the climate and vegetation in the region over the last 20,000 years. The hope is to build a picture of the first Americans. "Right now we are in a time of new discoveries and new ideas about the first Americans," Waters told Newsweek. "It will take a lot of time, but eventually more sites will be found, excavated, and studied. Also, more genetic information will come from the analysis of ancient human remains. The two lines of evidence—archaeological and genetic—are beginning to converge and tell a coherent story of the first Americans."

The team made their discovery at the Debra L. Friedkin site, named for the family who owns the land, about 40 miles northwest of Austin in Central Texas. The site has undergone extensive archaeological work for the past 12 years.

Map of the Americas showing pre-Clovis sites. Photo by Pratyeka CC BY-SA 4.0

"The findings expand our understanding of the earliest people to explore and settle North America," Waters said. "The peopling of the Americas during the end of the last Ice Age was a complex process and this complexity is seen in their genetic record. Now we are starting to see this complexity mirrored in the archaeological record."

The Clovis won their name because of artifacts found near Clovis, New Mexico, in 1932. Other evidence included a mammoth skeleton with a spear-point in its ribs, found by a cowboy in 1926 near Folsom, New Mexico. Clovis sites have since been identified throughout the United States, as well as Mexico and Central America. The Clovis people were long regarded as the first human inhabitants of the New World, and ancestors of all the indigenous cultures of North and South America.
