fluxion_ai.core.modules.llm_modules module

class fluxion_ai.core.modules.llm_modules.DeepSeekR1ChatModule(*args, remove_thinking_tag_content: bool = True, **kwargs)[source]

Bases: LLMChatModule

A class to handle chatting with deepseek-r1 models via a REST API. R1 model output includes the model’s reasoning process, enclosed in a <think> </think> tag; this module can strip that content from responses (controlled by remove_thinking_tag_content).

Example usage:

from fluxion_ai.modules.llm_query_module import DeepSeekR1ChatModule

# Initialize the DeepSeekR1ChatModule
llm_module = DeepSeekR1ChatModule(endpoint="http://localhost:11434/api/chat", model="deepseekr1")

# Chat with the DeepSeekR1 model
response = llm_module.chat(messages=[
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello, how can I help you?"}
])

print(response)
get_input_params(*args, messages, tools={}, **kwargs)[source]

Get the input parameters for the LLM chat.

Parameters:
  • messages (List[Dict[str, str]]) – The chat messages, each a dict with "role" and "content" keys.

  • tools (List[Dict[str, str]]) – Optional tool definitions to make available to the model.

Returns:

The input parameters for the LLM chat.

Return type:

Dict[str, Any]

post_process(response, full_response=False)[source]

Post-process the API response.

Parameters:
  • response (Dict[str, Any]) – The raw response from the API.

  • full_response (bool) – Whether to return the full response or a processed subset.

Returns:

The processed response data.

Return type:

Dict[str, Any]

remove_thinking(content: str) → str[source]

Remove the thinking process from the response.

Parameters:

content (str) – The content to process.

Returns:

The content with the thinking process removed.

Return type:

str
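
The implementation of remove_thinking is not shown here, but a minimal sketch of such a tag-stripping step (a hypothetical strip_think_tags helper, assuming the reasoning appears in one or more <think>…</think> spans) could be:

```python
import re

def strip_think_tags(content: str) -> str:
    # Drop every <think>...</think> span, including multi-line ones,
    # then trim any whitespace left around the remaining answer.
    return re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()

raw = "<think>The user greeted me; answer politely.</think>Hello! How can I help?"
print(strip_think_tags(raw))  # Hello! How can I help?
```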

class fluxion_ai.core.modules.llm_modules.DeepSeekR1QueryModule(*args, remove_thinking_tag_content: bool = True, **kwargs)[source]

Bases: LLMQueryModule

A class to handle querying deepseek-r1 models via a REST API. R1 model output includes the model’s reasoning process, enclosed in a <think> </think> tag; this module can strip that content from responses (controlled by remove_thinking_tag_content).

Example usage:

from fluxion_ai.modules.llm_query_module import DeepSeekR1QueryModule

# Initialize the DeepSeekR1QueryModule
llm_module = DeepSeekR1QueryModule(endpoint="http://localhost:11434/api/generate", model="deepseekr1")

# Query the DeepSeekR1 model
response = llm_module.query(prompt="What is the capital of France?")
print(response)
post_process(response, full_response=False)[source]

Post-process the API response.

Parameters:
  • response (Dict[str, Any]) – The raw response from the API.

  • full_response (bool) – Whether to return the full response or a processed subset.

Returns:

The processed response data.

Return type:

Dict[str, Any]

remove_thinking(content: str) → str[source]

Remove the thinking process from the response.

Parameters:

content (str) – The content to process.

Returns:

The content with the thinking process removed.

Return type:

str

class fluxion_ai.core.modules.llm_modules.LLMApiModule(endpoint: str, model: str = None, headers: Dict[str, Any] = {}, timeout: int = 10, response_key: str = 'response', temperature: float | None = None, seed: int | None = None, streaming: bool = False)[source]

Bases: ApiModule, ABC

Provides an abstract base for interacting with a locally hosted LLM via a REST API, encapsulating the request/response patterns shared by the query and chat modules.

execute(*args, **kwargs) → Dict[str, Any][source]

Execute the LLM module.

Parameters:
  • *args – Variable length argument list.

  • **kwargs – Arbitrary keyword arguments.

Returns:

The response from the LLM.

Return type:

Dict[str, Any]

get_input_params(*args, **kwargs) → Dict[str, Any][source]

Get the input parameters for the LLM module.

Parameters:
  • *args – Variable length argument list.

  • **kwargs – Arbitrary keyword arguments.

Returns:

The input parameters for the LLM module.

Return type:

Dict[str, Any]
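
The exact payload depends on the subclass and the backing API (the examples in this module target Ollama endpoints). As a sketch, a generate-style payload built from the constructor parameters might look like the following; build_payload is a hypothetical helper, not part of the module:

```python
from typing import Any, Dict, Optional

def build_payload(model: str, prompt: str,
                  temperature: Optional[float] = None,
                  seed: Optional[int] = None,
                  streaming: bool = False) -> Dict[str, Any]:
    # Ollama-style /api/generate payload; optional sampling settings
    # go under "options" only when they are actually set.
    payload: Dict[str, Any] = {"model": model, "prompt": prompt, "stream": streaming}
    options = {k: v for k, v in {"temperature": temperature, "seed": seed}.items()
               if v is not None}
    if options:
        payload["options"] = options
    return payload

print(build_payload("llama3.2", "What is the capital of France?", temperature=0.2))
```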

get_response(data, full_response=False) → Dict[str, Any][source]

Send a POST request to the API endpoint and return the response.

Parameters:
  • data (Dict[str, str]) – The data to send in the POST request.

  • full_response (bool) – Whether to return the full response or a processed subset.

Returns:

The parsed JSON response from the API.

Return type:

Dict[str, Any]

post_process(response: Dict[str, Any], full_response: bool = False)[source]

Post-process the API response.

Parameters:
  • response (Dict[str, Any]) – The raw response from the API.

  • full_response (bool) – Whether to return the full response or a processed subset.

Returns:

The processed response data.

Return type:

Dict[str, Any]
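
As a sketch of what this post-processing step typically amounts to (assuming the configured response_key names the field of interest; this is an illustration, not the module’s actual code):

```python
from typing import Any, Dict

def post_process(response: Dict[str, Any], response_key: str = "response",
                 full_response: bool = False) -> Any:
    # Return the raw API response as-is, or only the field named by
    # response_key (e.g. "response" for /api/generate, "message" for /api/chat).
    if full_response:
        return response
    return response.get(response_key)

api_reply = {"model": "llama3.2", "response": "Paris", "done": True}
print(post_process(api_reply))  # Paris
```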

class fluxion_ai.core.modules.llm_modules.LLMChatModule(*args, response_key: str = 'message', **kwargs)[source]

Bases: LLMApiModule

A class to handle chatting with an LLM via REST API.

Example usage:

from fluxion_ai.modules.llm_query_module import LLMChatModule

# Initialize the LLMChatModule
llm_module = LLMChatModule(endpoint="http://localhost:11434/api/chat", model="llama3.2")

# Chat with the LLM
response = llm_module.chat(messages=[
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello, how can I help you?"}
])

print(response)
get_input_params(*args, messages: List[str], tools: List[Dict[str, str]] = {}, **kwargs) → Dict[str, Any][source]

Get the input parameters for the LLM chat.

Parameters:
  • messages (List[Dict[str, str]]) – The chat messages, each a dict with "role" and "content" keys.

  • tools (List[Dict[str, str]]) – Optional tool definitions to make available to the model.

Returns:

The input parameters for the LLM chat.

Return type:

Dict[str, Any]
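
For a chat endpoint the payload carries the message list (and any tool definitions) rather than a single prompt. A hypothetical sketch, again assuming an Ollama-style API; build_chat_payload is not part of the module:

```python
from typing import Any, Dict, List, Optional

def build_chat_payload(model: str, messages: List[Dict[str, str]],
                       tools: Optional[List[Dict[str, Any]]] = None,
                       streaming: bool = False) -> Dict[str, Any]:
    # Ollama-style /api/chat payload: the conversation goes under
    # "messages"; tool definitions are attached only when provided.
    payload: Dict[str, Any] = {"model": model, "messages": messages, "stream": streaming}
    if tools:
        payload["tools"] = tools
    return payload

payload = build_chat_payload("llama3.2", [{"role": "user", "content": "Hello!"}])
print(payload["messages"][0]["content"])  # Hello!
```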

post_process(response, full_response=False)[source]

Post-process the API response.

Parameters:
  • response (Dict[str, Any]) – The raw response from the API.

  • full_response (bool) – Whether to return the full response or a processed subset.

Returns:

The processed response data.

Return type:

Dict[str, Any]

class fluxion_ai.core.modules.llm_modules.LLMQueryModule(endpoint: str, model: str = None, headers: Dict[str, Any] = {}, timeout: int = 10, response_key: str = 'response', temperature: float | None = None, seed: int | None = None, streaming: bool = False)[source]

Bases: LLMApiModule

A class to handle querying an LLM via a REST API, abstracting the common prompt-in, response-out request pattern.

Example usage:

from fluxion_ai.modules.llm_query_module import LLMQueryModule

# Initialize the LLMQueryModule
llm_module = LLMQueryModule(endpoint="http://localhost:11434/api/generate", model="llama3.2")

# Query the LLM
response = llm_module.query(prompt="What is the capital of France?")
print(response)
get_input_params(prompt: str, **kwargs) → Dict[str, str][source]

Get the input parameters for the LLM query.

Parameters:

prompt (str) – The prompt for the LLM.

Returns:

The input parameters for the LLM query.

Return type:

Dict[str, str]

post_process(response: str | Dict[str, Any], full_response=False)[source]

Post-process the API response.

Parameters:
  • response (Dict[str, Any]) – The raw response from the API.

  • full_response (bool) – Whether to return the full response or a processed subset.

Returns:

The processed response data.

Return type:

Dict[str, Any]