Moreover, the bias encountered in chatbot responses can hinder the retrieval of scientific papers and systematic reviews that delve into the potential risks associated with oral contraceptives. These studies may present nuanced findings, highlighting adverse effects, contraindications, or specific populations for whom caution is advised. However, due to the bias toward promoting contraceptive use, the chatbot may overlook or underrepresent such studies, limiting individuals’ exposure to critical information. Various factors such as optimization techniques, sponsored content, and commercial interests can influence the visibility and ranking of information, potentially skewing the presentation of viewpoints. In the context of public health, biases can significantly impact the availability and accessibility of information related to oral contraceptive pills and their associated health risks.
In navigating information gleaned from such chatbots, it is imperative for individuals to exercise critical thinking and evaluate responses meticulously. A casual conversation with a chatbot may not always yield a balanced view, as the algorithms are likely to prioritize certain sources and perspectives. Furthermore, the "best interests" of the public, as determined by the algorithms (which are themselves shaped by the interests of those who build and fund them), may not align with providing comprehensive and unbiased information. It is essential to be aware of these limitations and actively seek out diverse sources of information.
Policy recommendations
Enhanced transparency and disclosure
- Advocate for makers of chatbots such as ChatGPT and Bard to provide greater transparency regarding the factors that influence how facts are presented and which information is made visible.
- Encourage chatbot makers to disclose potential conflicts of interest, sponsorships, or biases that may shape their responses.
Promoting critical health literacy
- Advocate for the integration of critical evaluation and information literacy skills into chatbot interactions, empowering individuals to critically assess online health information.
- Collaborate in the development of educational campaigns and resources that educate the public about biases in chatbot responses and strategies for effectively navigating and evaluating the information provided by chatbots.
Collaboration between public health experts and tech companies
- Foster partnerships between public health experts and technology companies to ensure the development of search algorithms that prioritize the presentation of balanced, evidence-based information.
- Engage in ongoing dialogue to address concerns related to biased responses and work toward optimizing the retrieval of reliable health information.
Conclusions
Bias in public health-related conversations with chatbots such as ChatGPT, Bard, and other emerging systems poses significant challenges to individuals seeking accurate and comprehensive information. By acknowledging the existence of biases and actively addressing them, we can foster a digital landscape that enables individuals to make informed decisions about their health. Through policy recommendations such as enhanced transparency, the promotion of critical health literacy, and collaboration between public health experts and tech companies, we can mitigate the impact of bias and ensure equitable access to reliable information. Empowered by critical evaluation skills, individuals can navigate public health information online with confidence, uncovering hidden assumptions and making informed choices that contribute to their overall well-being.
Declaration of Generative AI and AI-assisted technologies in the writing process
During the preparation of this work, the authors used grammar-checking tools and generative AI to improve the readability and organization of the content. After using these tools, the authors reviewed and edited the content as needed, and take full responsibility for the content presented.