Tuesday, November 19, 2024

Different training could cut AI power use by 30%

Rows of electricity pylons connected by power lines against an orange sky.

A less wasteful way to train large language models, such as the GPT series, completes training in the same amount of time while using up to 30% less energy, according to a new study.

The approach could save enough energy to power 1.1 million US homes in 2026, based on Wells Fargo’s projections of AI power demand. It could also take a bite out of the International Monetary Fund’s prediction that data centers could account for 1.2% of the world’s carbon emissions by 2027—and the water demands that come with that energy use.

Some experts say that these costs could be outweighed by environmental benefits. They argue that AI could be a “game changer” for fighting climate change by identifying ways to optimize supply chains and the grid, manage our energy needs, and improve research on climate change.

Still, that doesn’t excuse squandering energy, and some of the power used to train AI has zero impact on training time and model accuracy.

“Why spend something when there’s no point?” says Mosharaf Chowdhury, a University of Michigan associate professor of computer science and engineering and the corresponding author of the study presented at the 30th Symposium on Operating Systems Principles.

“We can’t keep building bigger and bigger data centers because we won’t have the power to run them. If we can reduce the energy consumed by AI, we can reduce AI’s carbon footprint and cooling requirements and allow for more computation to fit within our current energy constraints.”

The energy waste arises when AI training work is divided unequally among GPUs, computer processors specialized for large-scale data and graphics applications. Splitting the work is necessary for processing huge datasets, but it opens the door to waste.

“AI models today are so large, they cannot fit inside a single computer processor,” says Jae-Won Chung, a doctoral student in computer science and engineering and the first author of the study.

“They need to be divided into tens of thousands of processors to be trained, but dividing the models in perfectly equal sizes across all processors is practically impossible.”

The training jobs are difficult to split up evenly because some tasks need to be grouped together on the same processor, much as each installment of a book series is grouped together on an organized shelf. Depending on how the tasks are grouped, some processors might get stuck with the AI-training equivalent of the Encyclopedia Britannica while others get assigned a fantasy trilogy.

Because current training methods run each processor at top speed, processors with lighter loads finish their calculations before the others. This doesn’t speed up training, which isn’t complete until every processor finishes its job, but it is wasteful because running a processor faster consumes more energy. In addition, problems such as faulty hardware or network delays create energy waste by slowing down a single processor’s computing speed.

To save energy, the researchers developed a software tool, called Perseus, that identifies a critical path, or a series of subtasks that will take the longest time to complete. Then, Perseus slows down processors that aren’t on the critical path so that they all finish their jobs around the same time—eliminating unnecessary power use.
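To make the idea concrete, here is a minimal Python sketch of critical-path speed scaling in the spirit of what the article describes. It is not the Perseus code: the toy two-GPU task graph, the millisecond durations, and the cubic power-versus-frequency model are all illustrative assumptions.

```python
# A minimal sketch of critical-path speed scaling in the spirit of Perseus,
# not the Perseus code. The toy task graph, durations, and the cubic
# power-vs-frequency model (P ~ f^3) are illustrative assumptions.

tasks = {  # subtask -> duration in ms at full GPU frequency
    "gpu0_work": 20.0,   # heavily loaded processor (on the critical path)
    "gpu1_work": 8.0,    # lightly loaded processor (has slack)
    "allreduce": 2.0,    # synchronization step that waits on both
}
deps = {"allreduce": ["gpu0_work", "gpu1_work"]}  # subtask -> prerequisites

def earliest_finish(tasks, deps):
    """Earliest finish time of each subtask via longest-path recursion."""
    memo = {}
    def finish(t):
        if t not in memo:
            start = max((finish(d) for d in deps.get(t, ())), default=0.0)
            memo[t] = start + tasks[t]
        return memo[t]
    return {t: finish(t) for t in tasks}

finish = earliest_finish(tasks, deps)
makespan = max(finish.values())  # step time, set entirely by the critical path

# Latest time each subtask may finish without delaying the overall step.
succs = {t: [s for s, ds in deps.items() if t in ds] for t in tasks}
latest = {}
def latest_finish(t):
    if t not in latest:
        latest[t] = (makespan if not succs[t] else
                     min(latest_finish(s) - tasks[s] for s in succs[t]))
    return latest[t]

for t in sorted(tasks):
    slack = latest_finish(t) - finish[t]
    stretched = tasks[t] + slack       # run slower, finish just in time
    f = tasks[t] / stretched           # relative frequency required
    # With power ~ f^3 and time ~ 1/f, energy scales as ~f^2 per subtask.
    print(f"{t}: slack {slack:4.1f} ms, run at {f:.2f}x frequency, "
          f"~{(1 - f * f) * 100:.0f}% energy saved on this subtask")
```

In this toy training step, the lightly loaded GPU can run at 40% frequency and still finish exactly when the critical path does, cutting that subtask’s energy use sharply without adding a millisecond to the step.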

“Reducing the power cost of AI can have important implications for equitable AI access,” Chowdhury says. “If a country doesn’t have enough power to run a big model, they might need to use services from far away, or be stuck running smaller, less accurate models. This gap could further perpetuate disparity between different communities.”

The team tested Perseus by training GPT-3, three other large language models, and one computer vision model.

Perseus is an open-source tool available as part of Zeus, a tool for measuring and optimizing AI energy consumption.

Funding for the research came from the National Science Foundation, Dutch Research Council (NWO) Talent Programme, VMware, Mozilla Foundation, Salesforce, and Kwanjeong Educational Foundation. Chameleon Cloud and CloudLab supported the research by providing computational resources.

Source: University of Michigan


Ultra-processed foods are a danger for people with type 2 diabetes

An open bag of potato chips.

Consuming more ultra-processed foods—from diet sodas to packaged crackers to certain cereals and yogurts—is closely linked with higher blood sugar levels in people with type 2 diabetes, researchers report.

In a paper in the Journal of the Academy of Nutrition and Dietetics, the team describes how—even more than just the presence of sugar and salt in the diet—having more ultra-processed foods laden with additives can lead to higher average blood glucose levels over a period of months, a measure called HbA1C.

“There are a lot of ways to look at and measure healthy eating,” says senior author Marissa Burgermaster, assistant professor of nutritional sciences at the University of Texas. “We set out to see which measurement was associated with blood sugar control in people with type 2 diabetes.

“We found that the more ultra-processed foods by weight in a person’s diet, the worse their blood sugar control was, and the more minimally processed or unprocessed foods in a person’s diet, the better their control was.”

The study used baseline data from an ongoing clinical trial called Texas Strength Through Resilience in Diabetes Education (TX STRIDE), led by Mary Steinhardt in UT’s College of Education. Participants included 273 African American adults diagnosed with type 2 diabetes and recruited through Austin-area churches. Each participant provided two 24-hour diet recalls and a blood sample to measure HbA1C.

The researchers examined the diet recalls and scored them against three widely used indexes of the overall quality or nutritional content of a person’s diet, but none of those tools was associated with blood glucose control. Instead, the number of grams of ultra-processed food that participants ate or drank was linked to worse control, while participants who ate more whole foods, or foods and drinks with minimal processing, had correspondingly better control.

Recent studies have indicated that eating more ultra-processed foods is linked to higher rates of cardiovascular disease, obesity, sleep disorders, anxiety, depression, and early death.

Ultra-processed foods are typically higher in added sugars and sodium, but the researchers concluded that the A1C increases were not driven by added sugar and sodium alone; otherwise, the increases would have correlated with the tools that measure overall nutritional quality in the diet. Synthetic flavors, added colors, emulsifiers, artificial sweeteners, and other artificial ingredients may be partly to blame, hypothesizes Erin Hudson, a graduate student author of the paper, which would suggest that dietary guidelines may need to place more emphasis on ultra-processed foods.

For participants of the study who were not on insulin therapy, a diet with 10% more of its overall grams of food being ultra-processed was associated with HbA1C levels that were, on average, 0.28 percentage points higher.

Conversely, those whose diet contained a 10% higher amount of overall food being minimally processed or unprocessed had HbA1C levels, on average, 0.30 percentage points lower.

Having an HbA1C below 7 is considered ideal for people with type 2 diabetes, and people who consumed, on average, 18% or less of their grams of food from ultra-processed foods were more likely to meet this mark.
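For a sense of scale, the short sketch below applies the two reported associations to a hypothetical diet. The baseline HbA1C value and the additive combination of the two effects are assumptions made purely for illustration; this is back-of-the-envelope arithmetic, not a clinical model.

```python
# Back-of-the-envelope arithmetic using the associations reported above.
# The baseline value and the additive combination are assumptions.

BASELINE_HBA1C = 8.0           # hypothetical starting HbA1C, in percent
UPF_SLOPE = 0.28 / 10          # +0.28 points per +10% of grams from UPF
MINIMAL_SLOPE = -0.30 / 10     # -0.30 points per +10% minimally processed

def expected_shift(upf_change_pct, minimal_change_pct):
    """HbA1C change for shifts in diet composition (percent of total grams)."""
    return upf_change_pct * UPF_SLOPE + minimal_change_pct * MINIMAL_SLOPE

# Swapping 10% of dietary grams from ultra-processed to minimally processed:
print(f"{BASELINE_HBA1C + expected_shift(-10, +10):.2f}")  # 8.00 -> 7.42
```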

Funding for the research came from the National Institutes of Health.

Source: UT Austin


Monday, November 18, 2024

Listening gives robots human-like touch

A young woman holds a cup of soda.

Researchers have given robots a sense of touch by “listening” to vibrations, allowing them to identify materials, understand shapes, and recognize objects just like human hands do.

Imagine sitting in a dark movie theater wondering just how much soda is left in your oversized cup. Rather than prying off the cap and looking, you pick the cup up and shake it a bit to hear how much ice is rattling around inside, giving you a decent indication of whether you’ll need to get a free refill.

Setting the drink back down, you wonder absent-mindedly if the armrest is made of real wood. After giving it a few taps and hearing a hollow echo, however, you decide it must be made of plastic.

This ability to interpret the world through acoustic vibrations emanating from an object is something we do without thinking. And it’s an ability that researchers are on the cusp of bringing to robots to augment their rapidly growing set of sensing abilities.

Set to be published at the Conference on Robot Learning (CoRL 2024) being held November 6–9 in Munich, Germany, new research from Duke University details a system dubbed SonicSense that allows robots to interact with their surroundings in ways previously limited to humans.

A robot "hand" with four "fingers," each with a microphone for sensing.
The ability to feel acoustic vibrations through tactile interactions gives this robotic hand a human-like sense of touch to better perceive the world. (Credit: Duke)

“Robots today mostly rely on vision to interpret the world,” explains Jiaxun Liu, lead author of the paper and a first-year PhD student in the laboratory of Boyuan Chen, professor of mechanical engineering and materials science at Duke.

“We wanted to create a solution that could work with complex and diverse objects found on a daily basis, giving robots a much richer ability to ‘feel’ and understand the world.”

SonicSense features a robotic hand with four fingers, each equipped with a contact microphone embedded in the fingertip. These sensors detect and record vibrations generated when the robot taps, grasps, or shakes an object. And because the microphones are in contact with the object, the robot can tune out ambient noise.

Based on the interactions and detected signals, SonicSense extracts frequency features and uses its previous knowledge, paired with recent advancements in AI, to figure out what material the object is made of and its 3D shape. If it’s an object the system has never seen before, it might take 20 different interactions for the system to come to a conclusion. But if it’s an object already in its database, it can correctly identify it in as little as four.
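The paper’s actual pipeline is not reproduced here, but a minimal Python sketch of the general recipe the article describes, turning a recorded vibration into a frequency-feature vector and matching it against previously heard objects, might look like the following. The sample rate, bin count, and toy database are assumptions, and simple nearest-neighbor matching stands in for the learned models the team actually pairs with these features.

```python
# A minimal sketch of a frequency-feature pipeline of the kind the article
# describes, not Duke's SonicSense code. Sample rate, feature size, and
# the toy "database" are assumptions for illustration.

import numpy as np

SAMPLE_RATE = 48_000  # Hz, a typical audio rate (assumed)

def frequency_features(signal, n_bins=64):
    """Summarize a tap/shake recording as a coarse magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum /= spectrum.sum() + 1e-12          # normalize out loudness
    # Pool the spectrum into a fixed number of bins so recordings of
    # different durations yield comparable feature vectors.
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

def identify(signal, database):
    """Nearest-neighbor match against previously heard objects."""
    query = frequency_features(signal)
    return min(database, key=lambda name: np.linalg.norm(query - database[name]))

# Toy database: feature vectors from earlier interactions (assumed values).
rng = np.random.default_rng(0)
database = {"ceramic mug": rng.random(64), "plastic cup": rng.random(64)}
tap = rng.standard_normal(SAMPLE_RATE // 2)     # 0.5 s of recorded vibration
print(identify(tap, database))
```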

“SonicSense gives robots a new way to hear and feel, much like humans, which can transform how current robots perceive and interact with objects,” says Chen, who also has appointments and students from electrical and computer engineering and computer science. “While vision is essential, sound adds layers of information that can reveal things the eye might miss.”

In the paper and demonstrations, Chen and his laboratory showcase a number of capabilities enabled by SonicSense. By turning or shaking a box filled with dice, it can count the number of dice inside as well as determine their shape. By doing the same with a bottle of water, it can tell how much liquid is inside. And by tapping around the outside of an object, much like how humans explore objects in the dark, it can build a 3D reconstruction of the object’s shape and determine what material it’s made from.

While SonicSense is not the first attempt to use this approach, it goes further and performs better than previous work by using four fingers instead of one, touch-based microphones that tune out ambient noise, and advanced AI techniques. This setup allows the system to identify objects composed of more than one material, objects with complex geometries or transparent or reflective surfaces, and materials that are challenging for vision-based systems.

“While most datasets are collected in controlled lab settings or with human intervention, we needed our robot to interact with objects independently in an open lab environment,” says Liu. “It’s difficult to replicate that level of complexity in simulations. This gap between controlled and real-world data is critical, and SonicSense bridges that by enabling robots to interact directly with the diverse, messy realities of the physical world.”

These abilities make SonicSense a robust foundation for training robots to perceive objects in dynamic, unstructured environments. So does its cost: built from the same contact microphones that musicians use to record sound from guitars, 3D-printed parts, and other commercially available components, the system costs just over $200 to construct.

Moving forward, the group is working to enhance the system’s ability to interact with multiple objects. By integrating object-tracking algorithms, robots will be able to handle dynamic, cluttered environments, bringing them closer to human-like adaptability in real-world tasks.

Another key development lies in the design of the robot hand itself. “This is only the beginning. In the future, we envision SonicSense being used in more advanced robotic hands with dexterous manipulation skills, allowing robots to perform tasks that require a nuanced sense of touch,” Chen says. “We’re excited to explore how this technology can be further developed to integrate multiple sensory modalities, such as pressure and temperature, for even more complex interactions.”

Support for the work came from the Army Research Laboratory STRONG program and DARPA’s FoundSci and TIAMAT programs.

Source: Duke University


Gender and socializing shape gamers’ eating habits

A young man reaches for a potato chip while sitting in front of a computer and gaming.

The myth of junk-food-eating gamers is actually a story about social hunger and gender, according to new research.

“The eating habits of gamers are actually attributable to them being social creatures,” explains Thomas Skelly from the food and resource economics department at the University of Copenhagen.

“If they live with others, they prioritize the social aspect of meals and often make an effort to prepare food. If not, and they live alone—as an increasing number of Danes do—it’s often about quickly finishing a meal to get back to socializing with friends online.”

Together with research colleague Kristian Haulund Jensen from the psychology department at Aarhus University, Skelly examined existing research on the topic and then combined it with his own data from fourteen young gamers using diary entries, qualitative interviews, and focus groups.

The new study concludes that the main factor influencing whether young gamers opt for a frozen supermarket pizza or a homemade stew is whether the most attractive social activity centers on cooking and dining or on the online game waiting for them.

Moreover, a significant gender difference emerged, which we’ll return to later.

Previous research missed the everyday aspect

According to the researchers, previous studies on this topic have fallen short because they overemphasized LAN parties, where gamers gather to play in groups big and small.

LAN parties are a form of social gathering where gamers sit among rows of computers and play side by side. This event format peaked in the 90s and 2000s, before fast internet connections made online gaming possible, but these gatherings continue as a cultural highlight for many gamers.

A LAN party can involve as few as two people playing together on a local computer network, but they’re often much larger, with several hundred or even thousands of participants. The official record was set in Sweden in 2013, where 22,810 gamers attended the DreamHack event.

“The eating habits of most people vary between everyday life and special occasions, and it’s no different in gamer culture. In everyday life, gamers, like other young people, are somewhat driven by the need for a quick bite. But when they gather at major events, there’s an inherent culture of unhealthy eating, often washed down with energy drinks and soda. In large part, this is where the stereotypes originate,” explains Jensen.

Therefore, the researchers want to differentiate between two types of food in understanding gamer food culture—”gamer food” and “gaming food.”

The first type is closely associated with social gamer events, such as LAN parties. Here, according to the researchers, the intake of junk food is a symbolic act.

“This excess of pizza, chips, cola, etc., is heavily symbolic in a kind of celebratory ritual of gaming culture. But this ritualistic junk-food-eating is strongly associated with the stereotype of the unhealthy gamer, even though it’s not an everyday phenomenon, as shown in our review of earlier studies,” says Skelly.

For everyday habits, the researchers use the second term, “gaming food,” which refers to the daily eating habits of gamers. This might involve “fast food” because speed can be crucial.

“If the priority is to get back to socializing with friends in an online game, it needs to be quick. But it doesn’t necessarily have to be junk food—a sandwich on dark Danish rye bread is equally fast,” explains Skelly.

Nevertheless, the researchers did find a pattern regarding whether the chosen quick “gaming food” is healthy or unhealthy, which has more to do with gender than gamer culture.

There is a significant difference in the norms male and female gamers have regarding the food they consume. According to the researchers, this somewhat aligns with gender-based eating habits in other contexts. The same applies to household priorities, like cleaning.

Youth itself is another factor that can influence unhealthy eating habits among gamers. Most gamers are young, and young people’s attitudes toward food are typically marked by a rebelliousness against parental expectations for healthy eating.

Furthermore, the researchers point out that young people typically don’t have much money for food, which can affect the quality of what they eat as well.

Both of these factors, combined with the need to eat quickly so as not to miss out on socializing with friends online, intersect with gender norms. Gaming is dominated by young men, and according to the researchers, a masculine trait regarding food is to show less concern for health than women do.

“Women participants showed much more awareness of health and household ideals than the men. These considerations are essential in managing everyday life, where they often play no significant role for male gamers,” says Kristian Haulund Jensen.

He believes that the difference is partly due to society placing higher demands on women’s bodies, appearance, and homes—which are expected to be presentable. This creates a different form of shame among women about being perceived as unhealthy or unclean.

“So, whereas men are more inclined to satisfy a craving with junk food from a convenience store and leave the trash behind, women might instead make a rye bread sandwich in the kitchen and tidy up afterward,” says Jensen.

According to the researchers, the reputation of gamer culture is more a result of gaming historically being dominated by men (especially in earlier years) than something inherent to the culture, at least when looking at everyday life.

“At major events like LAN parties, other mechanisms come into play. Here, the significant majority of men among gamers has created some traditions that apply to the environment as a whole, including its food culture. But if you imagine a large LAN event with only women gamers, it’s easy to imagine that things would look healthier between the rows of computers,” says Skelly.

The research appears in Convergence: The International Journal of Research into New Media Technologies.

Source: University of Copenhagen


Robot trained on surgery videos performs as well as human docs

Two robot hands hold onto a suture during a practice session.

A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors.

The successful use of imitation learning to train surgical robots eliminates the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help.

The findings, led by Johns Hopkins University researchers, are being spotlighted this week at the Conference on Robot Learning in Munich.

“It’s really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” says senior author Axel Krieger, an assistant professor in Johns Hopkins University’s mechanical engineering department. “We believe this marks a significant step forward toward a new frontier in medical robotics.”

The researchers used imitation learning to train the da Vinci Surgical System robot to perform three fundamental tasks required in surgical procedures: manipulating a needle, lifting body tissue, and suturing. In each case, the robot trained on the team’s model performed the same surgical procedures as skillfully as human doctors.

The model combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, where ChatGPT works with words and text, this model speaks “robot” with kinematics, a language that breaks down the angles of robotic motion into math.

The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures. These videos, recorded by surgeons all over the world, are used for post-operative analysis and then archived. Nearly 7,000 da Vinci robots are used worldwide, and more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to “imitate.”

While the da Vinci system is widely used, researchers say it’s notoriously imprecise. But the team found a way to make the flawed input work. The key was training the model to perform relative movements rather than absolute actions, which are inaccurate.
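A toy numerical sketch shows why relative actions sidestep that imprecision. Everything here is simplified and assumed: poses are reduced to one dimension and the constant calibration bias is invented for illustration; this is not the team’s implementation.

```python
# Why delta (relative) actions cancel a constant absolute-position bias.
# The 1-D positions and the bias value are illustrative assumptions.

import numpy as np

true_path = np.linspace(0.0, 1.0, 6)   # demonstrated tool positions
bias = 0.05                             # unknown constant calibration error

observed = true_path + bias             # what the robot's encoders report

# Absolute policy: replay the observed positions and inherit the bias.
absolute_error = np.abs(observed - true_path).max()

# Relative policy: learn step-to-step deltas and apply them from wherever
# the tool actually starts; the constant bias cancels in the differences.
deltas = np.diff(observed)
replayed = np.concatenate(([true_path[0]], true_path[0] + np.cumsum(deltas)))
relative_error = np.abs(replayed - true_path).max()

print(f"absolute replay error: {absolute_error:.3f}")  # ~0.050
print(f"relative replay error: {relative_error:.3f}")  # ~0.000
```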

“All we need is image input and then this AI system finds the right action,” says lead author Ji Woong “Brian” Kim, a postdoctoral researcher at Johns Hopkins. “We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn’t encountered.”

“The model is so good at learning things we haven’t taught it,” adds Krieger. “Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it to do.”

The model could be used to quickly train a robot to perform any type of surgical procedure, the researchers say. The team is now using imitation learning to train a robot to perform not just small surgical tasks but a full surgery.

Before this advancement, programming a robot to perform even a simple aspect of a surgery required hand-coding every step. Someone might spend a decade trying to model suturing, Krieger says. And that’s suturing for just one type of surgery.

“It’s very limiting,” Krieger says. “What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery.”

Additional authors are from Johns Hopkins and Stanford University.

Source: Johns Hopkins University


Friday, November 15, 2024

Satellite and brain data show environmental impacts on young brains

A young boy touches his head with his left hand while looking down.

A pioneering new study links satellite and brain imaging data to identify how environmental factors can impact mental health, cognition, and brain development in young people.

The research appears in the journal Nature Mental Health.

The study represents an advance in understanding how specific environmental conditions may affect the brains of young people.

“The findings highlight the importance of the urban environment in mental health. We see a critical window during childhood and adolescence where environmental factors can shape future cognitive and behavioral development,” says the study’s senior author and principal investigator Vince Calhoun, a professor of psychology at Georgia State University. Calhoun has faculty appointments at Georgia Tech and Emory University, and leads the Center for Translational Research in Neuroimaging and Data Science.

The researchers used a dataset from the Adolescent Brain Cognitive Development (ABCD) Study, which is the largest ongoing study on child brain development in the US. For the study, the team analyzed data collected from 11,800 children across 21 US cities.

Calhoun says that by linking fMRI imaging with satellite data, including the locations of study participants, researchers were able to more robustly identify how the physical environment influences cognition and mental health outcomes in children ages 9 to 10.

Collaborating closely with the ABCD team, the researchers released their results as part of ABCD Data Release 5.0. This enables the research community to address critical questions regarding the connection between the environment and mental health.

Lead author and New Light Technologies Chief Scientist Ran Goldblatt says researchers analyzed satellite-based observations, including different types of land cover and land use, as well as the amount of light emitted at night. These “UrbanSat” data can be coupled with neuroimaging and behavioral measures to provide insights.

“The ABCD dataset provides a unique opportunity for a much deeper understanding of associations between a range of indicators of the complex physical urban environment and their impacts on mental health,” Goldblatt says.

“This dataset also allows us to observe dynamic environmental changes and their impact on mental health over time, pinpointing specific interventions to boost mental well-being in various communities.”

The study looked at how land is used, including factors like light pollution and the number of buildings in an area, as a way to understand the area’s social and economic status. The researchers found that places with more light at night and more buildings tended to have lower levels of parental education and household income, while areas with more trees and plants were linked to higher education and income.

“With the precise, objective measurements of environmental aspects such as greenspaces, the density of urban areas and water bodies, the ABCD dataset can enrich our understanding of how physical surroundings impact brain activity through diverse complex physiological, psychological and social processes,” Calhoun says.

“In this new study, we see that unique environmental and physical features may impact the extent and patterns of the brain’s gray and white matter and its functional network connectivity.”

Additional researchers from Heidelberg University in Mannheim, Germany; Rutgers University; New York Medical College School of Medicine; the University of California, San Diego; the University of Southern California in Los Angeles; the Laureate Institute for Brain Research; Tianjin Medical University General Hospital in Tianjin, China; and the Centre for Population Neuroscience and Stratified Medicine (PONS) in Berlin contributed to the work.

Funding for the work came from the National Institutes of Health.

Source: University of Georgia


Meteorite holds evidence of water on ancient Mars

The meteorite, a small dark rock, sits on a small, clear plastic stand.

A meteorite contains evidence of liquid water on Mars 742 million years ago, researchers report.

An asteroid struck Mars 11 million years ago and sent pieces of the red planet hurtling through space. One of these chunks of Mars eventually crashed into the Earth somewhere near Purdue University and is one of the few meteorites that can be traced directly to Mars.

This meteorite was rediscovered in a drawer at Purdue University in 1931 and was named the Lafayette Meteorite, after the Indiana city that is home to Purdue.

During early investigations of the Lafayette Meteorite, scientists discovered that it had interacted with liquid water while on Mars. Scientists have long wondered when that interaction with liquid water took place.

Researchers have recently determined the age of the minerals in the Lafayette Meteorite that formed when there was liquid water. The findings appear in Geochemical Perspectives Letters.

Marissa Tremblay, assistant professor with the earth, atmospheric, and planetary sciences department (EAPS) at Purdue University, is the lead author of this publication. She uses noble gases like helium, neon, and argon to study the physical and chemical processes shaping the surfaces of Earth and other planets. She explains that some meteorites from Mars contain minerals that formed through interaction with liquid water while still on Mars.

“Dating these minerals can therefore tell us when there was liquid water at or near the surface of Mars in the planet’s geologic past,” she says.

“We dated these minerals in the Martian meteorite Lafayette and found that they formed 742 million years ago. We do not think there was abundant liquid water on the surface of Mars at this time. Instead, we think the water came from the melting of nearby subsurface ice called permafrost, and that the permafrost melting was caused by magmatic activity that still occurs periodically on Mars to the present day.”
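For readers curious how a noble-gas chronometer yields an age, the standard potassium-argon relation is shown below. It is offered as general background only; the team’s specific measurement scheme for Lafayette is detailed in the paper itself.

```latex
% General K-Ar age equation (background only, not the paper's exact method):
%   lambda    = total decay constant of 40K
%   lambda_EC = partial constant for the branch that produces 40Ar
%   40Ar*     = accumulated radiogenic argon
t \;=\; \frac{1}{\lambda}\,
  \ln\!\left(1 \;+\; \frac{\lambda}{\lambda_{\mathrm{EC}}}\cdot
  \frac{{}^{40}\mathrm{Ar}^{*}}{{}^{40}\mathrm{K}}\right)
```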

In the new paper, her team demonstrates that the age obtained for the timing of water-rock interaction on Mars is robust and that the chronometer used was not affected by events that happened to Lafayette after it was altered in the presence of water.

“The age could have been affected by the impact that ejected the Lafayette Meteorite from Mars, the heating Lafayette experienced during the 11 million years it was floating out in space, or the heating Lafayette experienced when it fell to Earth and burned up a little bit in Earth’s atmosphere,” she says. “But we were able to demonstrate that none of these things affected the age of aqueous alteration in Lafayette.”

Ryan Ickert, senior research scientist with Purdue EAPS, is a coauthor of the paper. He uses heavy radioactive and stable isotopes to study the timescales of geological processes. He demonstrated that other isotope data (previously used to estimate the timing of water-rock interaction on Mars) were problematic and had likely been affected by other processes.

“This meteorite uniquely has evidence that it has reacted with water. The exact date of this was controversial, and our publication dates when water was present,” he says.

Origin story

Thanks to prior research, quite a bit is known about the Lafayette Meteorite’s origin story. It was ejected from the surface of Mars about 11 million years ago by an impact event.

“We know this because once it was ejected from Mars, the meteorite was bombarded by cosmic ray particles in outer space, which caused certain isotopes to be produced in Lafayette,” Tremblay says. “Many meteoroids are produced by impacts on Mars and other planetary bodies, but only a handful will eventually fall to Earth.”

But once Lafayette hit Earth, the story gets a little muddy. It is known for certain that the meteorite was found in a drawer at Purdue University in 1931. But how it got there is still a mystery. In a recent publication, Tremblay and others made strides in reconstructing the meteorite’s history after it arrived on Earth.

“We used organic contaminants from Earth found on Lafayette (specifically, crop diseases) that were particularly prevalent in certain years to narrow down when it might have fallen, and whether the meteorite fall may have been witnessed by someone,” Tremblay says.

Time capsule

Meteorites are solid time capsules from planets and other celestial bodies in our universe. They carry with them bits of data that can be unlocked by geochronologists. They are set apart from rocks found on Earth by a fusion crust that forms during their descent through our atmosphere, often making a fiery entrance visible in the night sky.

“We can identify meteorites by studying what minerals are present in them and the relationships between these minerals inside the meteorite,” says Tremblay.

“Meteorites are often denser than Earth rocks, contain metal, and are magnetic. We can also look for things like a fusion crust that forms during entry into Earth’s atmosphere. Finally, we can use the chemistry of meteorites (specifically their oxygen isotope composition) to fingerprint which planetary body they came from or which type of meteorite it belongs to.”

Dating the alteration minerals in Lafayette, and more generally in this class of meteorites from Mars called nakhlites, has been a long-term objective in planetary science, because scientists know that the alteration happened in the presence of liquid water on Mars. However, these materials are especially difficult to date, and previous attempts at dating them had been highly uncertain, likely affected by processes other than aqueous alteration, or both.

“We have demonstrated a robust way to date alteration minerals in meteorites that can be applied to other meteorites and planetary bodies to understand when liquid water might have been present,” Tremblay says.

Additional researchers from the Scottish Universities Environmental Research Centre (SUERC), the Department of Earth and Environmental Science at the University of St Andrews, the School of Geographical and Earth Sciences at the University of Glasgow, the School of Earth Sciences at the University of Bristol, and the Science Group at The Natural History Museum in London contributed to the work.

Source: Purdue University
