Introduction
Spring AI simplifies integrating AI capabilities into Spring applications. This tutorial shows you how to set up Spring AI and use it to generate text responses from Ollama's language models.
1. Setting Up the Project
Step 1: Create a New Spring Boot Project
Use Spring Initializr to create a new Spring Boot project. Include dependencies for Spring Web and Spring AI.
Using Spring Initializr:
- Go to start.spring.io
- Select:
  - Project: Maven
  - Language: Java
  - Spring Boot: latest stable 3.x release
  - Dependencies: Spring Web
- Generate the project and unzip it.
Step 2: Add Spring AI Dependency
Add the Spring AI OpenAI starter dependency to your pom.xml. Spring AI also ships a dedicated Ollama starter (spring-ai-starter-model-ollama), but because Ollama exposes an OpenAI-compatible API, this tutorial uses the OpenAI starter pointed at a locally running Ollama server.

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
    <version>1.0.0</version>
</dependency>
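Spring AI also publishes a BOM that keeps all of its module versions aligned, letting you omit the version on individual starters. A sketch for pom.xml, assuming version 1.0.0:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```

With the BOM imported, the starter dependency above no longer needs its own <version> element.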
2. Configuring the Spring Boot Starter
Step 1: Point the Starter at Your Local Ollama Server
Ollama runs locally and does not require an API key, but the OpenAI starter expects one, so any placeholder value works. Install Ollama and pull a model first (for example, ollama pull llama3). Then create an application.properties or application.yml file in your src/main/resources directory and add the connection settings.
For application.properties:

spring.ai.openai.api-key=ollama
spring.ai.openai.base-url=http://localhost:11434
spring.ai.openai.chat.options.model=llama3

For application.yml:

spring:
  ai:
    openai:
      api-key: ollama
      base-url: http://localhost:11434
      chat:
        options:
          model: llama3
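Under the hood, the starter posts an OpenAI-style chat completion request to Ollama's /v1/chat/completions endpoint, which is why the OpenAI starter works against Ollama at all. A minimal sketch of that request body, built by hand for illustration only (a real client uses a JSON library, and this naive concatenation does no escaping):

```java
public class OllamaRequest {
    // Sketch of the OpenAI-style chat request body the starter sends.
    // Illustration only: no JSON escaping is performed here.
    static String chatBody(String model, String prompt) {
        return "{\"model\":\"" + model + "\","
                + "\"messages\":[{\"role\":\"user\",\"content\":\"" + prompt + "\"}]}";
    }

    public static void main(String[] args) {
        System.out.println(chatBody("llama3", "Hello"));
    }
}
```

Seeing the wire format makes it clear that only the base URL and model name differ between talking to OpenAI and talking to a local Ollama server.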
Step 2: Create a Configuration Class (Optional)
The starter auto-configures the client from the properties above, so no manual setup is required. If you want a reusable ChatClient bean to inject elsewhere, you can build one from the auto-configured builder:

package com.example.demo.config;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OllamaConfig {

    // ChatClient.Builder is auto-configured by the Spring AI starter
    @Bean
    public ChatClient chatClient(ChatClient.Builder builder) {
        return builder.build();
    }
}
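If you prefer environment-driven configuration, Spring Boot's relaxed binding already maps environment variables onto properties (for example, SPRING_AI_OPENAI_BASE_URL overrides spring.ai.openai.base-url). The same fallback pattern can be sketched in plain Java; OLLAMA_MODEL here is a hypothetical variable name, not something the starter reads:

```java
import java.util.Map;

public class EnvConfig {
    // Hypothetical helper: resolve a model name from the environment,
    // falling back to a default when the variable is unset.
    static String modelName(Map<String, String> env) {
        return env.getOrDefault("OLLAMA_MODEL", "llama3");
    }

    public static void main(String[] args) {
        System.out.println(modelName(System.getenv()));
    }
}
```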
3. Implementing the Ollama Integration
Example: Text Generation
Create a service that sends a prompt to the model and returns the generated text.
Service:

package com.example.demo.service;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class OllamaService {

    private final ChatClient chatClient;

    // ChatClient.Builder is auto-configured by the Spring AI starter
    public OllamaService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    public String generateResponse(String prompt) {
        // Send the prompt as a user message and return the response text
        return chatClient.prompt()
                .user(prompt)
                .call()
                .content();
    }
}
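Before handing user input to the service, you may want basic validation. A hypothetical helper (not part of Spring AI) that trims whitespace and caps the prompt length so oversized input doesn't waste model context:

```java
public class PromptSanitizer {
    // Hypothetical helper: strip surrounding whitespace and cap the
    // prompt length before it is sent to the model.
    static String sanitize(String prompt, int maxChars) {
        String trimmed = (prompt == null) ? "" : prompt.strip();
        return (trimmed.length() <= maxChars) ? trimmed : trimmed.substring(0, maxChars);
    }

    public static void main(String[] args) {
        System.out.println(sanitize("  Hello, Ollama!  ", 5));
    }
}
```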
Controller:
package com.example.demo.controller;

import com.example.demo.service.OllamaService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OllamaController {

    @Autowired
    private OllamaService ollamaService;

    @GetMapping("/generateResponse")
    public String generateResponse(@RequestParam String prompt) {
        return ollamaService.generateResponse(prompt);
    }
}
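Because the controller takes the prompt as a query parameter, callers must URL-encode it. A small sketch of building the request URL (localhost:8080 assumed as the application's address):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PromptUrl {
    // Build the request URL a caller would use, URL-encoding the prompt
    // so spaces and punctuation survive the query string.
    static String buildUrl(String prompt) {
        return "http://localhost:8080/generateResponse?prompt="
                + URLEncoder.encode(prompt, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("Hello, Ollama!"));
    }
}
```

Note that java.net.URLEncoder encodes spaces as "+", which Spring decodes correctly for query parameters.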
4. Creating a Simple Frontend
For demonstration purposes, we will create a simple HTML page that allows users to interact with Ollama.
Step 1: Create an HTML File
Create an index.html file in the src/main/resources/static directory.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Ollama Integration</title>
</head>
<body>
    <h1>Ollama Integration</h1>
    <div>
        <textarea id="prompt" rows="4" cols="50" placeholder="Type your message here..."></textarea><br>
        <button onclick="generateResponse()">Send</button>
    </div>
    <div id="response"></div>
    <script>
        function generateResponse() {
            const prompt = document.getElementById('prompt').value;
            fetch(`/generateResponse?prompt=${encodeURIComponent(prompt)}`)
                .then(response => response.text())
                .then(data => {
                    document.getElementById('response').innerText = data;
                });
        }
    </script>
</body>
</html>
5. Testing the Integration
Step 1: Run the Application
Run your Spring Boot application. Ensure the application starts without errors.
Step 2: Access the Ollama Interface
Make sure the Ollama server is running locally (for example, via ollama serve). Open your browser and navigate to http://localhost:8080. You should see the simple chat interface. Type a message and click "Send" to interact with Ollama.
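You can also exercise the endpoint without the browser. The sketch below builds the same GET request with the JDK's HttpClient API; actually sending it requires the application (and Ollama) to be running, so the example only constructs and prints the request:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class EndpointCheck {
    // Build the GET request the frontend issues; send it with
    // HttpClient.newHttpClient().send(...) once the app is running.
    static HttpRequest buildRequest(String prompt) {
        String url = "http://localhost:8080/generateResponse?prompt="
                + URLEncoder.encode(prompt, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest("Tell me a joke");
        System.out.println(request.method() + " " + request.uri());
    }
}
```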
Conclusion
This tutorial demonstrated how to set up and integrate Ollama with a Spring Boot application using Spring AI. You learned how to create a service and controller for generating responses using Ollama, and how to create a simple frontend to interact with the AI model. This setup provides a foundation for building more complex and feature-rich AI applications.
Explore further customization and enhancements to leverage the full potential of Ollama in your Spring Boot projects.