LLM

Learn LangChain

LangChain is a framework that lets developers working with AI and its machine learning subset combine large language models with other external components to develop LLM-powered applications.

LangChain provides a simple interface to interact with pre-trained LLMs from providers like OpenAI, Cohere, Hugging Face, etc.

 

LangChain is a framework that simplifies the process of creating generative AI applications.

It enables applications that:

  • Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
  • Reason: rely on a language model to reason (about how to answer based on the provided context, what actions to take, etc.)
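As a plain-Python sketch (no LangChain required), "context-aware" simply means the prompt sent to the model is assembled from instructions plus grounding context; the helper name below is hypothetical:

```python
def build_prompt(instruction: str, context: str, question: str) -> str:
    """Assemble a prompt from an instruction, grounding context, and a question."""
    return (
        f"{instruction}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "Answer only from the context below.",
    "LangChain was launched as an open-source project in 2022.",
    "When was LangChain launched?",
)
print(prompt)
```

A framework like LangChain manages this assembly for you, but the underlying idea is just structured prompt construction.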

LangChain was launched as an open-source project by co-founders Harrison Chase and Ankush Gola in 2022.

Applications

LangChain can build chatbots or question-answering systems by integrating LLMs (e.g., Hugging Face and OpenAI models) with data sources or stores such as Apify Actors, Google Search, and Wikipedia.

  • Customer support chatbots
  • Personal assistants
  • Document summarization
  • Data analysis
  • QA over documents
  • DNA research
  • Evaluation
  • Healthcare
  • E-commerce

 

LangChain Libraries

As a framework, it allows developers to easily integrate their AI applications into web applications.

It contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
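The "chain" idea, components combined so that each output feeds the next, can be sketched in a few lines of plain Python (a conceptual illustration only, not LangChain's actual API):

```python
def chain(*steps):
    """Compose callables so each step's output becomes the next step's input."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# A toy chain: trim whitespace, lowercase, then slugify.
pipeline = chain(str.strip, str.lower, lambda s: s.replace(" ", "-"))
result = pipeline("  Hello LangChain  ")
print(result)  # hello-langchain
```

LangChain generalizes this pattern: the "steps" become prompts, models, retrievers, and parsers rather than string functions.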

Hugging Face

The platform where the machine learning community collaborates on models, datasets, and applications.

It acts as a home of machine learning (300k+ models and datasets, some paid, some free).

When we use any hosted model, we might need a Hugging Face access token.

OpenAI

  • Huge models that are regularly updated
  • Provides accurate results
  • Optional (instead of OpenAI, you can use Google Gemini)

Now create an account on OpenAI and generate an API key, as we are going to use it in LangChain. (LangChain provides a structure in which the code does not change from model to model: today we learn LangChain using an OpenAI model, and tomorrow we can use the same code with a Google Gemini model as well.)
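The snippet below reads the key from the environment with python-dotenv; a minimal .env file (the key name API_KEY is just what this example uses) could look like:

```
# .env  (keep this file out of version control)
API_KEY=sk-...
```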

# pip install openai python-dotenv
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()  # read variables from the .env file into the environment
api_key = os.getenv("API_KEY")
client = OpenAI(api_key=api_key)

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    store=True,
    messages=[
        {"role": "user", "content": "Write a Nepali Poem"}
    ],
)

print(completion.choices[0].message.content)

 

More about OpenAI

from typing import Dict, List
from openai import AsyncOpenAI
import asyncio

OPENAI_API_KEY = ""
OPENAI_MODEL = ""

class OpenAI:
    def __init__(self, model: str, openai_api_key: str) -> None:
        self.model = model
        self.client = AsyncOpenAI(api_key=openai_api_key)

    async def generate_text(self, prompt: List[Dict]) -> str | None:
        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=prompt,
            )
            return response.choices[0].message.content
        except Exception as e:
            raise e

    async def audio_to_text(self, audio_fp, speech_to_text_model) -> str | None:
        """
        Converts an audio file to text using OpenAI's Whisper model.
        :param audio_fp: Opened audio file (or any file object the SDK accepts).
        :return: Transcribed text, or None if an error occurs.
        """
        try:
            response = await self.client.audio.transcriptions.create(
                model=speech_to_text_model,
                file=audio_fp,
            )
            return response.text
        except Exception:
            return None

player = "Messi"
messages = [
    {
        "role": "system",
        "content": f"You are a professional sports journalist who provides detailed and accurate information about football players. Focus on giving verified and up-to-date information about {player}.",
    },
    {
        "role": "user",
        "content": f"Tell me everything I should know about {player}, including career highlights, achievements, personal life, and recent news.",
    },
]

abc = OpenAI(OPENAI_MODEL, OPENAI_API_KEY)
response = asyncio.run(abc.generate_text(messages))
print(response)

Azure OpenAI

from typing import Dict, List, Optional
from openai.lib.azure import AsyncAzureOpenAI

class AzureOpenAI:
    def __init__(self, azure_openai_api_key, azure_openai_api_endpoint, openai_api_version) -> None:
        self.client = AsyncAzureOpenAI(
            azure_endpoint=azure_openai_api_endpoint,
            api_key=azure_openai_api_key,
            api_version=openai_api_version,
        )

    async def generate_text(self, model: str, prompt: List[Dict[str, str]]) -> Optional[str]:
        try:
            response = await self.client.chat.completions.create(model=model, messages=prompt)
            return response.choices[0].message.content
        except Exception as e:
            raise e

 

Python setup and installation

pip install langchain

 

LLMs and Chat Models with LangChain

There are two types of models: LLMs (e.g., GPT-3: a string in, a string out) and chat models (e.g., GPT-4).

Chat models:

  • Take a list of messages as input and return a message.
  • There are a few different types of messages (AIMessage, HumanMessage, and SystemMessage, where the system message can set the tone, e.g., happy or sad).
  • All messages have a role and a content property.
    • role: defines who is saying the message.
    • content: defines the content of the message.

       

 

 


About author


Amrit Panta

Full-stack developer, content creator


