Experts examine how AI ethics can fix inequities in personalized nutrition
The AI revolution is transforming personalized nutrition, giving people greater access to health solutions. However, without ethical frameworks, the technology risks deepening health inequalities and cultural gaps, warn researchers.
Ahead of their research publication in Frontiers in Aging, Nutrition Insight speaks to authors Mariëtte Abrahams, Ph.D., CEO and founder of the nutrition innovation consultancy Qina, and Maria Raimundo, nutrition tech consultant, to learn more.
They call for transparent and inclusive AI to build trust and drive meaningful behavior change.
What are your views about AI in nutrition?
Abrahams: AI is an inevitable future, especially regarding nutrition. Different AI technologies can be leveraged, from prevention to clinical or medical nutrition, to solve problems such as menu planning and reduce the burden on practitioners. We also tend to think of AI as a rapidly developing technology in developed countries, but it has incredible applications in developing countries, too.
On a more cautious note, AI comes with many challenges and gaps, many of which boil down to bias in data and building algorithms, black box approaches, and a lack of continuous feedback. AI will change the workforce, how we work, and also how we serve the public. However, we need to ensure at this critical stage that humans are still in the loop and that the relevant regulatory frameworks are followed to protect society and reduce the risk of widening inequality.
Raimundo: AI can be a powerful tool in nutrition, offering the potential to personalize dietary recommendations, improve access to healthier and more sustainable food choices, and enhance adherence to health plans. Its applications are broad: AI might assist practitioners during or after consultations, help food manufacturers reformulate products or improve sustainability, or support consumers in making informed decisions — what to eat, how to cook it, and how to adapt it to their needs.
Mariëtte Abrahams, PhD, MBA, CEO & Founder of Qina.
However, for AI to be beneficial and safe, it must be grounded in robust, up-to-date scientific evidence and developed by multidisciplinary, diverse teams. This ensures inclusivity, accuracy, and ethical integrity. Equally important is the need for diverse datasets to train these systems, which is key to avoiding bias and ensuring global relevance.
Who gains most from AI in nutrition, and who gets left out?
Abrahams: Historically, tech giants have gained the most from AI because consumers have become accustomed to using social media platforms and platforms that provide recommendations and comparisons (e.g., Netflix). This means they hold vast amounts of user data, where the consumer is the product. However, regulators and consumers are catching up, and there is an increased awareness of having control over your data.
It is important to note that around 40% of European citizens have difficulties with digital literacy and around 50% with health literacy. So, AI on its own does not solve this problem. Society has to address it as well, or the individuals who need access to nutrition, food, and health resources the most will lose out. Research has already demonstrated that those left out often include people without internet or smartphone access, the older generation, and those from lower socioeconomic groups without access to health services.
Raimundo: Those who gain the most from AI in nutrition include healthcare professionals, patients, researchers, and the food industry — all of whom benefit from better data, tools, and decision-making support.
Maria Raimundo, MSc, nutrition tech consultant.
Unfortunately, the populations that could benefit the most are often left out: low-income individuals and underserved communities due to limited access to technology or education, ethnic minorities due to language or cultural mismatches, and populations in underdeveloped regions due to a lack of localized data. These exclusions highlight the urgent need for equity-focused AI development.
What if AI’s “healthy” advice clashes with personal or cultural values?
Abrahams: Research has already demonstrated that AI algorithms in nutrition are frequently built using limited food databases. In addition, cultural or non-Western foods are usually excluded from these algorithms. This means that clashes are inevitable right now, based on who is developing them, as they can also bring their own biases and preferences into algorithms. However, if more consumers voice their views and digital adoption increases from a wider audience, things can improve.
Raimundo: That very concern is why we argue for a comprehensive framework that embeds cultural relevance and adaptability at every development stage. Most AI-driven solutions still lack cultural sensitivity, which can exclude individuals based on dietary beliefs, religious practices, or traditional food systems.
To address this, AI systems should offer customization and clear communication channels so users can flag concerns or mismatches. Solution providers should treat this feedback as an essential part of the model improvement process.
Can AI-generated personalized advice weaken public health messages?
Abrahams: Actually, I believe they strengthen them. Alongside low AI literacy, there is also widespread food and health illiteracy. AI can, therefore, be used to personalize content based on tone, depth of explanation, literacy levels, and timing, depending on when the user is most open to receiving nudges and advice. Essentially, it meets individuals where they are. This is exactly why we wrote the paper: you cannot just offer personalized advice without helping users act on it. This is where AI can really help.
Raimundo: I believe personalized diets can complement public health strategies by helping individuals translate general recommendations into actions that fit their lifestyle, medical needs, and preferences. The key lies in integration: personalized diets and public health approaches must work in synergy, not opposition.
AI tools in nutrition risk excluding the very groups that need them most, such as those with low digital or health literacy.
How can AI be transparent if its workings are a black box?
Abrahams: There are very good guidelines to ensure companies are transparent about which data they use to train the algorithm. Ideally, you will have a human nutrition expert to evaluate the output. This means that even if the algorithm is a black box, you still have some way of understanding how it came to the recommendation.
Raimundo: Transparency is central to trust in AI. That’s why our framework advocates for clarity at every step: which data sources are used, which experts are involved, how decisions are made, and how users can challenge or understand the outputs. Therefore, it’s key to pair AI solutions with documentation, validation studies, and human oversight to ensure that users, practitioners, and regulators can evaluate the AI’s logic, fairness, and reliability.
Does AI risk replacing human judgment in health choices?
Abrahams: Yes, humans are naturally lazy and will choose the easier option, and AI can reinforce this behavior. As we have seen with some generative AI technologies, they can hallucinate, which means that AI users should always be critical before acting on the advice. Thankfully, many surveys have indicated that consumers do not trust advice from social media; using nutrition experts as part of a human-in-the-loop service can therefore help reduce mistakes.
Raimundo: There is a risk, particularly if AI is misused or over-relied upon. However, in our current context, where nutrition literacy is often low and access to professional guidance is limited, AI can serve as a bridge, helping people make better-informed decisions.
We view AI not as a replacement for human expertise but as a supportive tool. Proper education, ethical oversight, and human-AI collaboration can drive positive behavioral change while keeping human judgment central to health decision-making.