Spring AI: Chat Model API

The Spring AI Chat Model API is an abstraction that lets developers interact with large language models (LLMs) such as OpenAI's GPT-4 in a structured, portable way. This tutorial walks through setting up and using the Chat Model API in a Spring Boot application, covering its key features and configuration options.

Key Features of the Chat Model API

  1. Prompt Handling: The API provides methods to create and send prompts to the AI models, receiving and processing their responses.
  2. Configuration Options: You can customize various aspects of the model's behavior, including temperature, maximum tokens, and more.
  3. Function Calling: The model can invoke developer-registered functions as part of producing a response.
  4. Streaming Responses: Responses can be consumed as a reactive stream (a Reactor Flux) for real-time applications.

Setting Up the Project

Step 1: Create a New Spring Boot Project

Use Spring Initializr to create a new Spring Boot project. Include dependencies for Spring Web and Spring AI.

Using Spring Initializr:

  • Go to start.spring.io
  • Select:
    • Project: Maven Project
    • Language: Java
    • Spring Boot: 3.2.x or later (required by Spring AI)
    • Dependencies: Spring Web, OpenAI (listed under the AI category)
  • Generate the project and unzip it.

Step 2: Add Spring AI Dependency

Add the OpenAI starter dependency to your pom.xml. As of Spring AI 1.0.0 the artifact is named spring-ai-starter-model-openai (earlier milestone releases used spring-ai-openai-spring-boot-starter):

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
    <version>1.0.0</version>
</dependency>
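
Instead of pinning a version on each Spring AI artifact, you can import the Spring AI BOM in dependencyManagement and omit the version from individual dependencies (the version shown here is an example):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>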

Step 3: Configure API Key

Add your OpenAI API key to application.properties or application.yml. Spring AI binds it from the spring.ai.openai.api-key property:

spring.ai.openai.api-key=your_openai_api_key
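
An equivalent application.yml configuration, pulling the key from an environment variable so the secret stays out of version control (the OPENAI_API_KEY variable name is a common convention, not a requirement):

spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}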

Step 4: Create a ChatClient Bean

The starter auto-configures the OpenAI chat model and a ChatClient.Builder, so no manual client wiring is needed. To share a single ChatClient across the application, expose it as a bean built from the auto-configured builder:

package com.example.demo.config;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatClientConfig {

    // ChatClient.Builder is auto-configured by the Spring AI starter
    @Bean
    public ChatClient chatClient(ChatClient.Builder builder) {
        return builder.build();
    }
}

Using the Chat Model API

Example 1: Basic Text Completion

Create a service to handle text completions using the Chat Model API.

Service:

package com.example.demo.service;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final ChatClient chatClient;

    public ChatService(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String getCompletion(String prompt) {
        // Send the user message and return the model's text content
        return chatClient.prompt()
                .user(prompt)
                .call()
                .content();
    }
}

Controller:

package com.example.demo.controller;

import com.example.demo.service.ChatService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChatController {

    private final ChatService chatService;

    public ChatController(ChatService chatService) {
        this.chatService = chatService;
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String prompt) {
        return chatService.getCompletion(prompt);
    }
}
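
With the application running, the endpoint can be exercised from the command line (the default port 8080 and the sample prompt are assumptions):

curl "http://localhost:8080/chat?prompt=Tell%20me%20a%20joke"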

Example 2: Advanced Configuration

Customize the model's behavior with parameters such as temperature (higher values produce more varied output) and maximum tokens (an upper bound on the length of the completion).

Service with Custom Options:

package com.example.demo.service;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.prompt.ChatOptions;
import org.springframework.stereotype.Service;

@Service
public class CustomChatService {

    private final ChatClient chatClient;

    public CustomChatService(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String getCustomCompletion(String prompt) {
        // Portable options: a moderate temperature for some variety,
        // and a cap on the number of tokens in the completion
        ChatOptions options = ChatOptions.builder()
                .temperature(0.7)
                .maxTokens(150)
                .build();

        return chatClient.prompt()
                .user(prompt)
                .options(options)
                .call()
                .content();
    }
}

Example 3: Streaming Responses

Handle asynchronous responses using the streaming capabilities of the Chat Model API.

Service for Streaming Responses:

package com.example.demo.service;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;

@Service
public class StreamingChatService {

    private final ChatClient chatClient;

    public StreamingChatService(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public Flux<String> getStreamingCompletion(String prompt) {
        // stream() emits chunks of the response as the model generates them
        return chatClient.prompt()
                .user(prompt)
                .stream()
                .content();
    }
}

Controller for Streaming Responses:

package com.example.demo.controller;

import com.example.demo.service.StreamingChatService;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class StreamingChatController {

    private final StreamingChatService streamingChatService;

    public StreamingChatController(StreamingChatService streamingChatService) {
        this.streamingChatService = streamingChatService;
    }

    // Serve the stream as Server-Sent Events so clients receive
    // tokens as they are generated rather than one buffered response
    @GetMapping(value = "/streamChat", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> streamChat(@RequestParam String prompt) {
        return streamingChatService.getStreamingCompletion(prompt);
    }
}
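
Streaming output can be observed with curl's no-buffering flag, which prints each chunk as it arrives (the default port 8080 and the sample prompt are assumptions):

curl -N "http://localhost:8080/streamChat?prompt=Write%20a%20haiku"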

Conclusion

This tutorial introduced the key features and concepts of the Spring AI Chat Model API, illustrating how to set up and use this API in a Spring Boot application. By leveraging the Chat Model API, you can integrate powerful AI capabilities into your applications, enabling advanced text generation, customized model behavior, and real-time streaming responses.

For more detailed information, refer to the Spring AI documentation.

