A matter of taste: Electronic tongue reveals AI’s inner thoughts

November 19, 2024

The tongue includes a graphene-based ion-sensitive field-effect transistor, a conductive device capable of detecting chemical ions, linked to an artificial neural network trained on various datasets. Critically, Das noted, the sensors are not functionalized, meaning that a single sensor can detect different types of chemicals rather than requiring a dedicated sensor for each potential chemical. The researchers provided the neural network with 20 specific parameters to evaluate, all related to how a liquid sample interacts with the electrical properties of the sensor. Based on these researcher-specified parameters, the AI could identify samples, including diluted milks, different types of soda, coffee blends, and multiple fruit juices at several freshness levels, and report on their content with greater than 80% accuracy in about a minute.
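
The classification pipeline described above lends itself to a simple illustration. The sketch below is not the authors' code: it trains a small feed-forward network on 20 placeholder "figures of merit" per liquid sample, standing in for the researcher-specified electrical parameters. The synthetic data, class count, and network size are all assumptions made for illustration.

```python
# Minimal sketch, not the authors' code: a small neural network classifier
# trained on 20 per-sample "figures of merit". All data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))    # placeholder: 20 electrical parameters per liquid sample
y = rng.integers(0, 6, size=300)  # placeholder: 6 liquid classes (milks, sodas, juices, ...)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Standardize the parameters, then fit a small feed-forward network on them.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```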

“Having achieved reasonable accuracy with human-selected parameters, we decided to let the neural network define its own figures of merit by feeding it raw sensor data. We found that the neural network reached a near-ideal inference accuracy of more than 95% using machine-derived figures of merit rather than those provided by humans,” said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das. “So we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it made a decision.”
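
The raw-data variant Pannone describes can be sketched the same way, under the same caveats: instead of 20 curated parameters, each example is an entire raw sensor trace, and the network is left to derive its own figures of merit. The trace length, data, and labels below are placeholders, not the study's setup.

```python
# Minimal sketch of the raw-data variant: feed the whole raw trace to the
# network and let it find its own figures of merit. Synthetic placeholders only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X_raw = rng.normal(size=(300, 500))  # placeholder: 500-point raw sensor trace per sample
y = rng.integers(0, 6, size=300)     # placeholder: same 6 liquid classes as above

Xtr, Xte, ytr, yte = train_test_split(X_raw, y, test_size=0.25, random_state=1, stratify=y)
raw_net = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=1)
raw_net.fit(Xtr, ytr)
print(f"held-out accuracy on raw traces: {raw_net.score(Xte, yte):.2f}")
```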

This approach draws on game theory, a decision-making framework that accounts for the choices of other participants in order to predict the outcome for a single participant, and assigns a value to each piece of data considered. Using these explanations, the researchers were able to reverse engineer how the neural network weighed various components of a sample to reach its final decision, giving the team insight into the network’s decision-making process, which has remained largely opaque in the field of AI, according to the researchers. They discovered that instead of simply evaluating individual human-assigned parameters, the neural network considered the data it judged most important together, with the Shapley additive explanations revealing how much weight the network gave to each input.
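
As a rough illustration of this interpretability step, the sketch below applies the shap package's model-agnostic KernelExplainer to the classifier from the first sketch; it assumes the model, X_train, and X_test objects defined there, and it demonstrates the general Shapley-value technique rather than reproducing the authors' analysis.

```python
# Minimal sketch of Shapley additive explanations, assuming `model`, `X_train`
# and `X_test` from the earlier sketch. Illustrates the technique, not the
# authors' analysis pipeline.
import numpy as np
import shap

# KernelExplainer treats the classifier as a black box and estimates each
# feature's Shapley value, i.e. its contribution to a given prediction.
background = shap.sample(X_train, 50)  # small background set keeps it fast
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X_test[:5], nsamples=200)

# Depending on the shap version, the result is a list of per-class arrays or a
# single array; in both cases, large-magnitude entries mark the inputs the
# network leaned on most for that prediction.
print(np.shape(shap_values))
```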

The researchers compared this assessment to two people drinking milk. Both can identify that it is milk, but one person may think it is skim milk that has started to turn while the other thinks it is 2% milk that is still fresh. The nuances of why are not easy to explain, even for the person making the assessment.

“We found that the network was looking at more subtle features in the data – things that we, as humans, struggle to define properly,” Das said. “And because the neural network considers the sensor characteristics holistically, it mitigates the variations that can occur from day to day. When it comes to milk, the neural network can determine the varying water content of the milk and, in that context, determine whether indicators of spoilage are significant enough to be considered a food safety issue.”

According to Das, the tongue’s capabilities are only limited by the data it is trained on, meaning that while this study was focused on food evaluation, it could also be applied to medical diagnosis. And while sensitivity is important regardless of where the sensor is applied, the robustness of their sensors paves the way for large-scale deployment across different industries, the researchers said.

Das explained that the sensors do not need to be exactly the same, because machine learning algorithms can look at all the information together and still produce the correct response. This makes the manufacturing process more convenient and less expensive.
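
One way to picture this tolerance for imperfect sensors is the sketch below, which again reuses the objects from the first sketch: it simulates device-to-device gain and offset variation, augments the training set with perturbed copies, and checks how the classifier scores on perturbed test data. The variation model is an assumption made for illustration, not the researchers' characterization of their devices.

```python
# Minimal sketch of tolerance to sensor-to-sensor variation, reusing `model`,
# `X_train`, `y_train`, `X_test`, `y_test` from the first sketch. The gain and
# offset spread is an assumption made for illustration.
import numpy as np

rng = np.random.default_rng(2)

def with_sensor_variation(X, rng):
    """Apply a random per-sample gain and offset, mimicking imperfect sensors."""
    gain = rng.normal(1.0, 0.05, size=(X.shape[0], 1))
    offset = rng.normal(0.0, 0.02, size=(X.shape[0], 1))
    return X * gain + offset

# Augment training with perturbed copies so the model learns to ignore
# nuisance variation between nominally identical sensors.
X_aug = np.vstack([X_train, with_sensor_variation(X_train, rng)])
y_aug = np.concatenate([y_train, y_train])
model.fit(X_aug, y_aug)
print(f"accuracy on perturbed test data: "
      f"{model.score(with_sensor_variation(X_test, rng), y_test):.2f}")
```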

“We realized that we can live with imperfection,” Das said. “And that’s what nature is: it’s full of imperfections, but it can still make solid decisions, just like our electronic tongue.”

Das is also affiliated with the Materials Research Institute and the departments of electrical engineering and of materials science and engineering. Other contributors from Penn State’s Department of Engineering Science and Mechanics include Aditya Raj, a research technologist at the time of the research; Sarbashis Das, a graduate student at the time of the research who received his doctorate in electrical engineering in May; Ziheng Chen, a graduate student in engineering science and mechanics; and Collin A. Price, a graduate student who earned his bachelor of science degree in engineering science and mechanics in May. Mahmooda Sultana of NASA’s Goddard Space Flight Center also contributed.

A Space Technology Graduate Research Opportunities grant from NASA supported this work.
