Vigilantcorp Inc.


Python for Generative AI – Complete Practice
Student Version
Teacher Version

15 Practice Questions (Student – Questions Only)

Write full Python programs for each question.

1. Streamlit Prompt Playground

Write a Streamlit web application that allows a user to:

Enter a Generative AI prompt

Select the prompt type (Text Generation, Summarization, Explanation)

Display the prompt and selected type on the screen
The app should validate that the prompt is not empty before displaying it.

2. Streamlit Prompt History Tracker

Create a Streamlit app that:

Accepts prompts from the user

Stores prompt history using session state

Displays all previously entered prompts in chronological order
Explain how session state helps in Generative AI applications.

3. OpenAI / Groq / Gemini API Text Generator

Write a Python program that:

Sends a user-entered prompt to a Generative AI API (OpenAI, Groq, or Gemini)

Receives the generated response

Displays both the prompt and AI response clearly
Include proper error handling for failed API calls.

4. API-Based Prompt Length Controller

Develop a Python program that:

Sends a prompt to a GenAI API

Restricts the response length using API parameters

Prints the response length and generated output
Explain why response control is important in real-world applications.

5. Multi-Model API Switcher

Write a Python program that:

Allows the user to choose between OpenAI, Groq, or Gemini

Sends the same prompt to the selected model

Displays which model generated the response
Use functions to separate logic for each API.

6. LangChain Prompt Template Generator

Create a Python program using LangChain that:

Defines a reusable prompt template

Accepts user input dynamically

Sends the formatted prompt to an LLM
Explain how prompt templates improve consistency.

7. LangChain Prompt Chaining Program

Write a LangChain program that:

First asks the LLM to summarize a topic

Then asks the LLM to generate questions based on that summary
Demonstrate how chaining improves task decomposition.

8. LangChain Conversation Memory App

Develop a Python program using LangChain that:

Maintains conversation history

Allows the user to ask follow-up questions

Uses memory to preserve context between responses
Explain why memory is critical for chat-based AI systems.

9. LlamaIndex Document Loader

Write a Python program that:

Loads multiple text documents using LlamaIndex

Creates an index

Prints confirmation once indexing is complete
Explain the purpose of indexing in retrieval-based AI.

10. LlamaIndex Question-Answering System

Create a Python program that:

Accepts a user question

Searches indexed documents using LlamaIndex

Returns the most relevant answer
Explain how this differs from normal prompt-based generation.

11. Retrieval-Augmented Generation (RAG) Pipeline

Write a Python program that:

Retrieves relevant document content using LlamaIndex

Combines it with a user prompt

Sends the combined context to an LLM
Explain how RAG reduces hallucinations.

12. Ollama Local LLM Runner

Create a Python program that:

Runs a local LLM using Ollama

Sends a user prompt to the model

Prints the generated response
Explain the advantages of running LLMs locally.

13. Ollama Chat Application

Write a Python program that:

Simulates a chat interface using Ollama

Allows multiple user messages

Displays model responses in sequence
Explain how this differs from API-based chat systems.

14. Streamlit + Ollama AI Assistant

Build a Streamlit application that:

Accepts user prompts

Sends them to a local Ollama model

Displays the response in the web interface
Explain how this setup supports offline AI applications.

15. End-to-End GenAI Application

Design and implement a complete Generative AI application in Python that:

Uses Streamlit for UI

Uses LangChain for prompt handling

Uses either an API-based or Ollama-based LLM

Optionally uses LlamaIndex for document retrieval
Explain the role of each component in your architecture.

Mini Projects

  • Prompt Quality Analyzer (Python + Pandas)
  • Streamlit Prompt Playground
  • AI Text-to-Speech Reader
  • LangChain Chatbot with Memory
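
The mini projects above have no reference code. As a starting point for the Prompt Quality Analyzer, here is a minimal sketch using Pandas; the scoring heuristics (word count and a task-verb check) are illustrative choices, not a prescribed rubric, and the keyword set is a hypothetical example.

```python
# Prompt Quality Analyzer (sketch) – scores prompts on simple heuristics.
import pandas as pd

# Illustrative set of "task verbs" that tend to make prompts more specific.
SPECIFICITY_WORDS = {"explain", "list", "compare", "summarize", "step", "example"}

def score_prompt(prompt: str) -> dict:
    """Return simple quality metrics for a single prompt."""
    words = prompt.lower().split()
    return {
        "prompt": prompt,
        "word_count": len(words),
        "has_task_verb": any(w.strip(".,!?") in SPECIFICITY_WORDS for w in words),
    }

def analyze_prompts(prompts: list[str]) -> pd.DataFrame:
    """Build a DataFrame of metrics, longest prompts first."""
    df = pd.DataFrame(score_prompt(p) for p in prompts)
    return df.sort_values("word_count", ascending=False).reset_index(drop=True)

if __name__ == "__main__":
    samples = [
        "Explain transformers with an example",
        "AI?",
        "Summarize the history of neural networks in three sentences",
    ]
    print(analyze_prompts(samples))
```

Students can extend the heuristics (e.g., checking for context, constraints, or output-format instructions) without changing the DataFrame structure.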

Capstone Project

Build an End-to-End Generative AI Application

  • Streamlit frontend
  • Prompt engineering best practices
  • LangChain with Ollama or cloud LLM
  • Optional RAG with LlamaIndex

Teacher Version – 15 Complete Solutions

Reference implementations for instructors.

1. Streamlit Prompt Playground

Write a Streamlit web application that allows a user to:

Enter a Generative AI prompt

Select the prompt type (Text Generation, Summarization, Explanation)

Display the prompt and selected type on the screen
The app should validate that the prompt is not empty before displaying it.

# Streamlit Prompt Playground
import streamlit as st

st.set_page_config(page_title="GenAI Prompt Playground")

st.title("GenAI Prompt Playground")

prompt_type = st.selectbox(
    "Select Prompt Type",
    ["Text Generation", "Summarization", "Explanation"]
)

prompt = st.text_area("Enter your Generative AI prompt")

if st.button("Submit"):
    if prompt.strip() == "":
        st.error("Prompt cannot be empty.")
    else:
        st.success("Prompt Submitted Successfully!")
        st.write("### Prompt Type:")
        st.write(prompt_type)
        st.write("### Prompt:")
        st.write(prompt)

2. Streamlit Prompt History Tracker

Create a Streamlit app that:

Accepts prompts from the user

Stores prompt history using session state

Displays all previously entered prompts in chronological order
Explain how session state helps in Generative AI applications.

# Streamlit Prompt History Tracker
import streamlit as st

st.title("Prompt History Tracker")

if "history" not in st.session_state:
    st.session_state.history = []

prompt = st.text_input("Enter Prompt")

if st.button("Add Prompt"):
    if prompt:
        st.session_state.history.append(prompt)

st.write("## Prompt History")
for i, p in enumerate(st.session_state.history, 1):
    st.write(f"{i}. {p}")

3. OpenAI / Groq / Gemini API Text Generator

Write a Python program that:

Sends a user-entered prompt to a Generative AI API (OpenAI, Groq, or Gemini)

Receives the generated response

Displays both the prompt and AI response clearly
Include proper error handling for failed API calls.

# OpenAI / Groq / Gemini API Text Generator (OpenAI example)
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

prompt = input("Enter prompt: ")

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Prompt:", prompt)
    print("AI Response:")
    print(response.choices[0].message.content)
except Exception as e:
    # Handles network failures, invalid keys, rate limits, etc.
    print("API call failed:", e)

4. API-Based Prompt Length Controller

Develop a Python program that:

Sends a prompt to a GenAI API

Restricts the response length using API parameters

Prints the response length and generated output
Explain why response control is important in real-world applications.

# API-Based Prompt Length Controller
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

prompt = input("Enter prompt: ")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=100,  # caps the response length in tokens
    messages=[{"role": "user", "content": prompt}]
)

output = response.choices[0].message.content
print("Response Length (words):", len(output.split()))
print(output)

5. Multi-Model API Switcher

Write a Python program that:

Allows the user to choose between OpenAI, Groq, or Gemini

Sends the same prompt to the selected model

Displays which model generated the response
Use functions to separate logic for each API.

# Multi-Model API Switcher (OpenAI / Groq / Gemini)
# Each API's logic lives in its own function, as the question requires.

def ask_openai(prompt):
    from openai import OpenAI
    client = OpenAI(api_key="OPENAI_KEY")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def ask_gemini(prompt):
    import google.generativeai as genai
    genai.configure(api_key="GEMINI_KEY")
    model = genai.GenerativeModel("gemini-pro")
    return model.generate_content(prompt).text

def ask_groq(prompt):
    from groq import Groq
    client = Groq(api_key="GROQ_KEY")
    response = client.chat.completions.create(
        model="llama3-8b-8192",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

handlers = {"openai": ask_openai, "gemini": ask_gemini, "groq": ask_groq}

choice = input("Choose model (openai/groq/gemini): ").lower()
prompt = input("Enter prompt: ")

if choice in handlers:
    print(f"Response from {choice}:")
    print(handlers[choice](prompt))
else:
    print("Invalid choice")

6. LangChain Prompt Template Generator

Create a Python program using LangChain that:

Defines a reusable prompt template

Accepts user input dynamically

Sends the formatted prompt to an LLM
Explain how prompt templates improve consistency.

# LangChain Prompt Template Generator
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(api_key="OPENAI_KEY", model="gpt-4o-mini")

template = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in simple terms for beginners."
)

prompt = template.format(topic="Generative AI")
response = llm.invoke(prompt)

print(response.content)

7. LangChain Prompt Chaining Program

Write a LangChain program that:

First asks the LLM to summarize a topic

Then asks the LLM to generate questions based on that summary
Demonstrate how chaining improves task decomposition.

# LangChain Prompt Chaining
from langchain_openai import ChatOpenAI
from langchain.chains import SequentialChain, LLMChain
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(api_key="OPENAI_KEY")

summary_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Summarize {topic} in 3 sentences."
)

question_prompt = PromptTemplate(
    input_variables=["summary"],
    template="Create 3 questions from this summary:\n{summary}"
)

chain1 = LLMChain(llm=llm, prompt=summary_prompt, output_key="summary")
chain2 = LLMChain(llm=llm, prompt=question_prompt, output_key="questions")

overall_chain = SequentialChain(
    chains=[chain1, chain2],
    input_variables=["topic"],
    output_variables=["summary", "questions"]
)

result = overall_chain({"topic": "Generative AI"})
print(result)

8. LangChain Conversation Memory App

Develop a Python program using LangChain that:

Maintains conversation history

Allows the user to ask follow-up questions

Uses memory to preserve context between responses
Explain why memory is critical for chat-based AI systems.

# LangChain Conversation Memory App
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

llm = ChatOpenAI(api_key="OPENAI_KEY")
memory = ConversationBufferMemory()

chat = ConversationChain(llm=llm, memory=memory)

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    response = chat.predict(input=user_input)
    print("AI:", response)

9. LlamaIndex Document Loader

Write a Python program that:

Loads multiple text documents using LlamaIndex

Creates an index

Prints confirmation once indexing is complete
Explain the purpose of indexing in retrieval-based AI.

# LlamaIndex Document Loader
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(documents)

print("Documents indexed successfully.")

10. LlamaIndex Question-Answering System

Create a Python program that:

Accepts a user question

Searches indexed documents using LlamaIndex

Returns the most relevant answer
Explain how this differs from normal prompt-based generation.

# LlamaIndex Question Answering System
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

docs = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(docs)

query_engine = index.as_query_engine()

question = input("Ask a question: ")
response = query_engine.query(question)

print(response)

11. Retrieval-Augmented Generation (RAG) Pipeline

Write a Python program that:

Retrieves relevant document content using LlamaIndex

Combines it with a user prompt

Sends the combined context to an LLM
Explain how RAG reduces hallucinations.

# RAG Pipeline using LlamaIndex
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

docs = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(docs)

engine = index.as_query_engine()

prompt = input("Enter question: ")
response = engine.query(prompt)

print("RAG Answer:")
print(response)

12. Ollama Local LLM Runner

Create a Python program that:

Runs a local LLM using Ollama

Sends a user prompt to the model

Prints the generated response
Explain the advantages of running LLMs locally.

# Ollama Local LLM Runner
import subprocess

prompt = input("Enter prompt: ")

result = subprocess.run(
    ["ollama", "run", "llama3", prompt],
    capture_output=True,
    text=True
)

print(result.stdout)

13. Ollama Chat Application

Write a Python program that:

Simulates a chat interface using Ollama

Allows multiple user messages

Displays model responses in sequence
Explain how this differs from API-based chat systems.

# Ollama Chat Application
import subprocess

print("Ollama Chat (type exit to quit)")

while True:
    prompt = input("You: ")
    if prompt.lower() == "exit":
        break

    # Note: each subprocess call is stateless; earlier messages are
    # not sent as context, unlike an API-based chat session.
    response = subprocess.run(
        ["ollama", "run", "llama3", prompt],
        capture_output=True,
        text=True
    )
    print("AI:", response.stdout)

14. Streamlit + Ollama AI Assistant

Build a Streamlit application that:

Accepts user prompts

Sends them to a local Ollama model

Displays the response in the web interface
Explain how this setup supports offline AI applications.

# Streamlit + Ollama AI Assistant
import streamlit as st
import subprocess

st.title("Offline GenAI Assistant (Ollama)")

prompt = st.text_area("Ask something")

if st.button("Generate"):
    result = subprocess.run(
        ["ollama", "run", "llama3", prompt],
        capture_output=True,
        text=True
    )
    st.write(result.stdout)

15. End-to-End GenAI Application

Design and implement a complete Generative AI application in Python that:

Uses Streamlit for UI

Uses LangChain for prompt handling

Uses either an API-based or Ollama-based LLM

Optionally uses LlamaIndex for document retrieval
Explain the role of each component in your architecture.

# End-to-End GenAI Application
"""
Architecture:
- Streamlit UI
- LangChain prompt handling
- Ollama local LLM
"""

import streamlit as st
import subprocess

st.title("End-to-End GenAI App")

topic = st.text_input("Enter topic")

if st.button("Generate Explanation"):
    prompt = f"Explain {topic} in simple terms."
    result = subprocess.run(
        ["ollama", "run", "llama3", prompt],
        capture_output=True,
        text=True
    )
    st.write(result.stdout)