Generated by the author using DALL-E 3

Mental Health Chatbot using Llama3 and Langchain

Pooja Chandrashekara
4 min read · Apr 27, 2024


Introduction
Transformers and generative AI have changed how we solve real-world problems, making many of them far easier to tackle.

So what is generative AI, the buzzword behind this revolution? Generative AI is simply the generation of new data based on training samples. The new data can be text or an image.
There are two broad types: generative language models and generative image models.
For today, let's talk about generative language models. Generative AI's base model is the transformer, built from an encoder, a decoder, and attention layers. Building such a model typically starts with unsupervised pre-training on large amounts of unlabeled data, which is then refined with supervised fine-tuning.

What is LLM?
Simply put, a Large Language Model is a deep learning model trained on vast amounts of text that understands and generates language in a human-like fashion.

Let’s implement a small project, a mental health chatbot, using Langchain, Llama3, and Ollama to understand today’s LLMs.

  1. Langchain: Langchain is a framework for developing applications powered by LLMs. Learn more
  2. Llama3: Llama3 is Meta’s latest large language model, available with 8B and 70B parameters. Explore Llama3
  3. Ollama: Ollama is an open-source application that lets you run, share, and create large language models on your local computer. Visit Ollama

Now that you understand LLMs and generative AI, let’s build the mental health chatbot.

Image source: https://python.langchain.com/docs/use_cases/chatbots/

Step 1: Setup
First, set up your development environment (e.g., Visual Studio Code), install the required packages, download Ollama from https://ollama.com/, and pull the llama3 model onto your computer:

ollama pull llama3

Step 2: Load the dataset
Let’s load the data for our mental health chatbot. I used the World Health Organization’s guidelines as the data source.
1. Data loaders: Langchain provides a range of document loaders, which are the usual starting point for RAG (Retrieval-Augmented Generation).
  a. Load the data from URLs. I am going to use:
https://www.who.int/news-room/fact-sheets/detail/mental-disorders
https://www.who.int/news-room/questions-and-answers/item/stress
To load the data from the URLs:

from langchain_community.document_loaders import SeleniumURLLoader

urls = [
    "https://www.who.int/news-room/questions-and-answers/item/stress",
    "https://www.who.int/news-room/fact-sheets/detail/mental-disorders",
]
loader = SeleniumURLLoader(urls=urls)
data = loader.load()
  b. To load data from a single PDF:

from langchain_community.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("your_pdf") # I am using mental health guidelines from WHO
data = loader.load()
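Either way, the loader returns a list of Langchain Document objects, each pairing the extracted text with metadata such as the source. Here is a minimal sketch of that shape, using a plain dataclass as a stand-in for Langchain's actual Document class (the example text and metadata are made up), so it runs without any dependencies:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Simplified stand-in for langchain_core.documents.Document."""
    page_content: str  # the text extracted from the page or PDF
    metadata: dict = field(default_factory=dict)  # e.g. the source URL

# Roughly what loader.load() hands back: one Document per source
docs = [
    Document(
        page_content="Stress can be defined as a state of worry or mental tension...",
        metadata={"source": "https://www.who.int/news-room/questions-and-answers/item/stress"},
    ),
]
print(docs[0].metadata["source"])
```

Inspecting `page_content` and `metadata` on a few documents is a quick sanity check that the loader actually pulled in the text you expect before you hand it to the model.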

Step 3: Load the model and configure its parameters

from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

chain = ChatOllama(model="llama3", temperature=1)

# Pass the loaded documents into the prompt as context for the model
context = "\n\n".join(doc.page_content for doc in data)
messages = [
    SystemMessage(content="You are a helpful AI assistant, an expert in mental health information from the World Health Organization. Answer using this context:\n" + context),
    HumanMessage(content=input("Enter your question about the mental health plans and programs initiated by WHO: ")),
]  # ask a question related to your loaded documents

Here, the temperature parameter ranges from 0–2.
Higher temperature (e.g., 1–2): The model will generate more random and diverse responses. It’s like the model is more “adventurous” and willing to try out new combinations of words.
Lower temperature (e.g., 0–0.5): The model will generate more conservative and predictable responses. It is more “cautious” and sticks to what it knows works well.
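To see what temperature actually does, here is a small self-contained sketch (no model required, and the logits are made up for illustration): the model divides its next-token logits by the temperature before applying softmax, so a low temperature sharpens the distribution around the top token while a high temperature flattens it across more candidates.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for four candidate next tokens
logits = [4.0, 2.0, 1.0, 0.5]

cautious = softmax_with_temperature(logits, 0.5)     # low temperature
adventurous = softmax_with_temperature(logits, 2.0)  # high temperature

# The top token's probability is much larger at low temperature,
# so sampling almost always picks it; at high temperature the
# probability mass spreads out and other tokens get real chances.
print(round(max(cautious), 3), round(max(adventurous), 3))
```

Running this shows the most likely token dominating at temperature 0.5 and the distribution noticeably flattening at temperature 2, which is exactly the "cautious" versus "adventurous" behavior described above.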

Step 4: Invoke chat

chat_model_response = chain.invoke(messages)
print(chat_model_response)
# For a more readable answer
text = chat_model_response.content.strip()
print(text)

Results
Question: Based on the evidence synthesis and evidence-to-decision frameworks, give the five GDG-developed recommendations for adolescent mental health promotion and prevention interventions.

Output:
"1. Implement universal school-based interventions: The GDG recommends implementing universal school-based interventions that aim to promote positive mental health and reduce mental health problems among adolescents. These interventions can include mental health education, skills-building programs, and social-emotional learning activities.
2. Provide early intervention and support: The GDG suggests providing early intervention and support to adolescents who are experiencing mental health problems or who are at risk of developing them. This can involve identifying risk factors and providing targeted interventions, such as cognitive-behavioral therapy or counseling before the problem escalates.
3. Foster positive relationships: The GDG recommends fostering positive relationships between adolescents and their parents, peers, and other significant adults in their lives. Positive relationships can provide a supportive environment for adolescents to develop and maintain good mental health.
4. Promote physical activity and exercise: The GDG suggests promoting physical activity and exercise as a way to improve mental health outcomes among adolescents. Regular exercise has been shown to reduce symptoms of anxiety and depression, improve mood, and enhance overall well-being.
5. Address social determinants of health: The GDG recommends addressing the social determinants of health that can impact mental health among adolescents, such as poverty, discrimination, and social isolation. By addressing these factors, it is possible to create a more supportive environment for adolescent mental health."

References:

https://python.langchain.com/docs/get_started/introduction/
https://www.who.int/news-room/fact-sheets/detail/mental-disorders
https://ollama.com/
https://llama.meta.com/
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

Feel free to check out the code on GitHub.

Thank you for reading!


Written by Pooja Chandrashekara

Grad student in Data Science at GWU, aspiring data scientist exploring deep learning's power to solve real-world problems. Join my journey on Medium!