Setting Up Your Development Environment
Before diving into the code, it's essential to prepare your development environment: installing the necessary libraries and creating the project structure. Start by creating a new Python file, backend.py, which will house the core logic of the application. Install Flask with the command pipenv install flask, and install LangChain and the OpenAI client the same way. Setting up the environment properly is crucial for ensuring that the application runs smoothly. We'll also set up a web server (with Flask, later in this same file), which is essential to building web applications.
First, create a new file and call it backend.py. At the top of it, load the .env file so that API keys stay out of the source code. Then import the prompt template and LLM classes; we are using OpenAI's models to power the chatbot.
Here's the code for backend:
from dotenv import load_dotenv
load_dotenv()  # load OPENAI_API_KEY (and any other secrets) from the .env file

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.output_parsers import CommaSeparatedListOutputParser
These imports provide access to environment variables, LLMs, prompt engineering tools, and output parsing within the LangChain framework. Now you're set to create a fully functioning chatbot!
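For load_dotenv() to find your key, the project root needs a .env file alongside backend.py. OPENAI_API_KEY is the variable name LangChain's OpenAI wrapper reads; the value below is a placeholder:

```
OPENAI_API_KEY=sk-...
```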
Defining the LLM Model and Prompt Template
With our environment set up, let's define the LLM model we'll be using for our application. The following code instantiates an OpenAI LLM with a specified temperature and model name. Prompt engineering, done here through PromptTemplate, structures the interaction with the LLM so that the model knows what task it should perform. This step is where we tell the LLM what kind of expert it is.
Here's the code that will get the foundation of the chatbot up and running:
llm = OpenAI(temperature=1, model_name="text-davinci-003")

prompt_template = PromptTemplate(
    template="You are an SEO expert with 10 years of experience. Suggest 3 SEO-optimized .com domain names for my blog in the niche {niche}",
    input_variables=["niche"]
)
The temperature parameter controls the randomness of the LLM's output, while model_name selects the specific OpenAI model to use. The PromptTemplate is constructed with a template string and input variables, allowing for dynamic prompt generation. Proper setup of the prompt template is crucial for directing the LLM to produce relevant, useful output. For an SEO use case like this one, being explicit about what you want (three names, .com only, tied to the niche) is what makes the suggestions usable.
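To see what "dynamic prompt generation" means concretely, here is a plain-Python sketch of what PromptTemplate.format() does under the hood (this uses str.format, not the LangChain class itself):

```python
# Substitute the {niche} placeholder into the template string,
# mimicking prompt_template.format(niche=...).
template = ("You are an SEO expert with 10 years of experience. "
            "Suggest 3 SEO-optimized .com domain names for my blog "
            "in the niche {niche}")

query = template.format(niche="vegan cooking")
print(query)
```

The result is the full prompt text with the user's niche spliced in, ready to send to the LLM.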
Parsing the LLM Output
To ensure that the output from the LLM is structured in a way our application can readily use, we'll employ an output parser. Here, we're using the CommaSeparatedListOutputParser to parse the LLM's response into a Python list of comma-separated items, so you can shape the data from the LLM into whatever form your task requires.
output_parser = CommaSeparatedListOutputParser()
Output parsing is essential for transforming the raw output from the LLM into a structured format, making it easier to work with within our application.
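As a sketch of the behavior (plain Python, not the real LangChain class), the parser essentially splits the raw LLM string on commas and strips whitespace; the domain names below are made-up examples:

```python
# Mimic CommaSeparatedListOutputParser.parse(): split the raw
# LLM response on commas and strip surrounding whitespace.
def parse_comma_list(raw: str) -> list[str]:
    return [item.strip() for item in raw.split(",") if item.strip()]

print(parse_comma_list("fitfuel.com, leanbites.com, dailyplate.com"))
# → ['fitfuel.com', 'leanbites.com', 'dailyplate.com']
```

Note that the Flask route below returns the raw LLM string as-is; to return a proper list you could pass the response through output_parser.parse() before jsonify.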
Creating the Web Server with Flask
Now, let's set up our Flask web server, which will serve as the entry point for users to interact with our AI-powered application. The following code creates a Flask app and defines a route for handling user input. We'll extract the user-provided 'niche' from the request arguments and pass it to our LLM for domain name suggestions.
from flask import Flask, request, jsonify, render_template

app = Flask(__name__)

@app.route('/')
def home():
    # Serve the landing page (templates/home.html)
    return render_template("home.html")

@app.route('/chat')
def chat():
    # Read the niche from the query string, e.g. /chat?input=fitness
    niche = request.args.get("input")
    query = prompt_template.format(niche=niche)
    response = llm(query)
    return jsonify(response)

if __name__ == '__main__':
    app.run()
This code sets up a basic GET endpoint using Flask. When a user accesses the /chat route with a query string, the application extracts the niche, uses it to format the prompt template, queries the LLM, and returns the LLM's response as a JSON object. Additionally, Flask's render_template lets us render an HTML page as our landing page; Flask looks for home.html in a templates/ folder next to backend.py. We will code that page in the next step.
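Once the server is running, the /chat endpoint can be exercised with a plain GET request whose query string carries the niche. A sketch of building that URL with the standard library (the host and port are Flask's defaults, an assumption if you pass arguments to app.run()):

```python
from urllib.parse import urlencode

# Flask's default dev-server address is an assumption here.
base = "http://127.0.0.1:5000/chat"
url = base + "?" + urlencode({"input": "vegan cooking"})
print(url)  # → http://127.0.0.1:5000/chat?input=vegan+cooking
```

Opening that URL in a browser (or with curl) should return the LLM's suggestions as JSON.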
Designing the Front-End Interface
To make our application user-friendly, let's create a simple HTML form that allows users to input their desired niche and submit it to the server.
The front end needs two elements: 1) a text field with the id input, and 2) an output element with the id out, where the generated domain names appear. While the server is working, a loading indicator tells the user that the result will eventually be shown.
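Putting those requirements together, here is a minimal sketch of templates/home.html; the markup and the fetch call are my own illustration, and only the ids input and out, the loading indicator, and the /chat route come from the tutorial:

```html
<!DOCTYPE html>
<html>
<head><title>Domain Name Generator</title></head>
<body>
  <h1>SEO Domain Name Generator</h1>
  <!-- Text field with id "input", as the tutorial requires -->
  <input type="text" id="input" placeholder="Enter your niche">
  <button onclick="generate()">Generate Domain Names</button>
  <!-- Generated domain names land in the element with id "out" -->
  <p id="out"></p>
  <script>
    async function generate() {
      const niche = document.getElementById("input").value;
      const out = document.getElementById("out");
      out.textContent = "Loading...";  // shown while waiting for the server
      const resp = await fetch("/chat?input=" + encodeURIComponent(niche));
      out.textContent = await resp.json();  // /chat returns a JSON string
    }
  </script>
</body>
</html>
```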