Chat

AI-powered chat assistance for FiveM and RedM development.

The Chat API provides AI-powered assistance for FiveM and RedM development questions. Powered by advanced language models, it offers contextual help, code examples, and troubleshooting guidance.

Overview

The Chat API is an intelligent assistant specifically trained for CitizenFX development:

  • Expert Knowledge - Specialized in FiveM, RedM, and CitizenFX development
  • Code Generation - Creates working code examples and snippets
  • Troubleshooting - Helps debug issues and find solutions
  • Best Practices - Provides guidance on optimal development approaches
  • Framework Support - Covers ESX, QBCore, and other popular frameworks

The AI assistant can help with:

  • Resource development and structure
  • Native function usage and examples
  • Database integration and optimization
  • Performance optimization techniques
  • Security best practices
  • Framework-specific implementations

Base URL

https://fixfx.wiki/api/chat

Authentication

The Chat API supports authentication and enforces rate limits to ensure fair usage. Authenticated users receive higher limits (see Rate Limiting below).
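
The exact authentication scheme is not documented on this page. As a placeholder only, the sketch below assumes a standard Authorization: Bearer header; substitute whatever credential mechanism your FixFX account actually provides.

// Hypothetical authenticated request -- the Authorization header scheme is an
// assumption, not confirmed by this page. Unauthenticated requests also work,
// but with lower rate limits.
async function askFixieAuthenticated(question, apiKey) {
  const response = await fetch('https://fixfx.wiki/api/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}` // placeholder scheme
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: question }]
    })
  });
  return response.json();
}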

Endpoints

POST /api/chat

Sends a chat message to the AI assistant and receives a contextual response.

Request Body

{
  "messages": [
    {
      "role": "user",
      "content": "string"
    }
  ],
  "model": "string",
  "temperature": number,
  "stream": boolean,
  "context": {
    "framework": "string",
    "environment": "string",
    "topic": "string"
  }
}

Parameters

Parameter           | Type    | Description                                              | Default
messages            | array   | Chat message history                                     | Required
model               | string  | AI model to use (gpt-4o-mini, claude-3-sonnet)           | gpt-4o-mini
temperature         | number  | Response creativity (0.0-1.0)                            | 0.7
stream              | boolean | Enable streaming response                                | false
context.framework   | string  | Target framework (esx, qbcore, standalone)               | null
context.environment | string  | Environment (client, server, shared)                     | null
context.topic       | string  | Topic area (development, troubleshooting, optimization)  | null
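
For reference, a complete request body using the parameters above might look like this (the question text is illustrative):

{
  "messages": [
    {
      "role": "user",
      "content": "How do I register a server-side command?"
    }
  ],
  "model": "gpt-4o-mini",
  "temperature": 0.7,
  "stream": false,
  "context": {
    "framework": "esx",
    "environment": "server",
    "topic": "development"
  }
}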

Response Format

{
  "response": "string",
  "model": "string",
  "usage": {
    "prompt_tokens": number,
    "completion_tokens": number,
    "total_tokens": number
  },
  "context": {
    "framework": "string",
    "environment": "string",
    "confidence": number,
    "sources": ["string"]
  },
  "suggestions": ["string"],
  "code_examples": [
    {
      "language": "string",
      "code": "string",
      "description": "string"
    }
  ]
}
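
An illustrative response is shown below; all field values are examples only, not actual API output.

{
  "response": "To register a server-side command in ESX, use RegisterCommand on the server...",
  "model": "gpt-4o-mini",
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 187,
    "total_tokens": 229
  },
  "context": {
    "framework": "esx",
    "environment": "server",
    "confidence": 0.92,
    "sources": ["example-source"]
  },
  "suggestions": ["How do I restrict a command to a specific job?"],
  "code_examples": [
    {
      "language": "lua",
      "code": "RegisterCommand('greet', function(source, args) ... end, false)",
      "description": "Basic server-side command registration"
    }
  ]
}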

Example Usage

// Basic chat interaction
async function askFixie(question, framework = null) {
  const response = await fetch('https://fixfx.wiki/api/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      messages: [
        {
          role: 'user',
          content: question
        }
      ],
      model: 'gpt-4o-mini',
      temperature: 0.7,
      context: {
        framework: framework,
        environment: 'server',
        topic: 'development'
      }
    })
  });
  
  const data = await response.json();
  
  console.log('Fixie:', data.response);
  
  // Show code examples if provided
  if (data.code_examples && data.code_examples.length > 0) {
    console.log('\nCode Examples:');
    data.code_examples.forEach((example, index) => {
      console.log(`${index + 1}. ${example.description} (${example.language})`);
      console.log('```' + example.language);
      console.log(example.code);
      console.log('```');
    });
  }
  
  // Show suggestions
  if (data.suggestions && data.suggestions.length > 0) {
    console.log('\nSuggested follow-up questions:');
    data.suggestions.forEach((suggestion, index) => {
      console.log(`${index + 1}. ${suggestion}`);
    });
  }
  
  return data;
}
 
// Example questions
await askFixie("How do I create a vehicle spawning command in ESX?", "esx");
await askFixie("What's the best way to handle player data persistence?");
await askFixie("How can I optimize database queries in FiveM?");

Streaming Responses

For real-time interaction, enable streaming responses:

// Streaming chat interface
async function streamChat(question, onChunk, onComplete) {
  const response = await fetch('https://fixfx.wiki/api/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: question }],
      stream: true,
      model: 'gpt-4o-mini'
    })
  });
  
  if (!response.body) {
    throw new Error('Streaming not supported');
  }
  
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let fullResponse = '';
  
  try {
    while (true) {
      const { done, value } = await reader.read();
      
      if (done) break;
      
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() || '';
      
      for (const line of lines) {
        if (line.startsWith('data: ')) {
          const data = line.slice(6);
          
          if (data === '[DONE]') {
            onComplete(fullResponse);
            return;
          }
          
          try {
            const parsed = JSON.parse(data);
            if (parsed.choices?.[0]?.delta?.content) {
              const chunk = parsed.choices[0].delta.content;
              fullResponse += chunk;
              onChunk(chunk, fullResponse);
            }
          } catch (e) {
            // Skip invalid JSON
          }
        }
      }
    }
  } finally {
    reader.releaseLock();
  }
}
 
// Usage example
const chatOutput = document.getElementById('chat-output');
 
streamChat(
  "How do I implement a player teleport system?",
  (chunk, fullResponse) => {
    // Update UI with each chunk
    chatOutput.textContent = fullResponse;
  },
  (finalResponse) => {
    // Handle completion
    console.log('Chat completed:', finalResponse);
  }
);

Use Cases

Development Assistant

// Build a development assistant interface
class FixieDevelopmentAssistant {
  constructor() {
    this.chatHistory = [];
    this.currentContext = {
      framework: null,
      project: null,
      currentFile: null
    };
  }
  
  async askQuestion(question, additionalContext = {}) {
    const context = { ...this.currentContext, ...additionalContext };
    
    const response = await fetch('https://fixfx.wiki/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        messages: [
          ...this.chatHistory,
          { role: 'user', content: question }
        ],
        context,
        model: 'gpt-4o-mini'
      })
    });
    
    const data = await response.json();
    
    // Update chat history
    this.chatHistory.push(
      { role: 'user', content: question },
      { role: 'assistant', content: data.response }
    );
    
    // Limit history size
    if (this.chatHistory.length > 20) {
      this.chatHistory = this.chatHistory.slice(-16);
    }
    
    return data;
  }
  
  async getCodeReview(code, language) {
    const question = `Please review this ${language} code and suggest improvements:\n\n\`\`\`${language}\n${code}\n\`\`\``;
    return this.askQuestion(question, { topic: 'code-review' });
  }
  
  async debugError(errorMessage, code) {
    const question = `I'm getting this error: "${errorMessage}"\n\nIn this code:\n\`\`\`lua\n${code}\n\`\`\`\n\nWhat's wrong and how can I fix it?`;
    return this.askQuestion(question, { topic: 'debugging' });
  }
  
  async generateCode(description, framework) {
    const question = `Generate ${framework} code for: ${description}`;
    return this.askQuestion(question, { 
      framework, 
      topic: 'code-generation' 
    });
  }
  
  async optimizePerformance(code, language) {
    const question = `How can I optimize this ${language} code for better performance?\n\n\`\`\`${language}\n${code}\n\`\`\``;
    return this.askQuestion(question, { topic: 'optimization' });
  }
  
  setContext(context) {
    this.currentContext = { ...this.currentContext, ...context };
  }
  
  clearHistory() {
    this.chatHistory = [];
  }
}
 
// Usage examples
const assistant = new FixieDevelopmentAssistant();
 
// Set project context
assistant.setContext({
  framework: 'esx',
  project: 'roleplay-server',
  environment: 'server'
});
 
// Ask general questions
const response1 = await assistant.askQuestion(
  "How do I create a custom job in ESX?"
);
 
// Get code review
const codeReview = await assistant.getCodeReview(`
RegisterCommand('heal', function(source, args)
    local xPlayer = ESX.GetPlayerFromId(source)
    if xPlayer.job.name == 'ambulance' then
        local ped = GetPlayerPed(source)
        SetEntityHealth(ped, 200)
    end
end)
`, 'lua');
 
// Debug an error
const debugHelp = await assistant.debugError(
  "attempt to index a nil value (global 'ESX')",
  "ESX.TriggerServerCallback('bank:withdraw', function(success) end)"
);
 
// Generate new code
const generatedCode = await assistant.generateCode(
  "a vehicle shop system with categories and test drives",
  "qbcore"
);

Interactive Tutorial System

// Create an interactive tutorial system
class InteractiveTutorial {
  constructor(topic, framework) {
    this.topic = topic;
    this.framework = framework;
    this.steps = [];
    this.currentStep = 0;
    this.assistant = new FixieDevelopmentAssistant();
    this.assistant.setContext({ framework, topic: 'tutorial' });
  }
  
  async startTutorial() {
    const response = await this.assistant.askQuestion(
      `Create a step-by-step tutorial for ${this.topic} in ${this.framework}. 
       Include code examples and explanations for each step.`
    );
    
    // Parse the response to extract steps
    this.parseSteps(response.response);
    return this.getCurrentStep();
  }
  
  parseSteps(tutorialText) {
    // Simple parsing - in practice, you'd want more sophisticated parsing
    const sections = tutorialText.split(/Step \d+:/);
    this.steps = sections.slice(1).map((step, index) => ({
      id: index + 1,
      title: `Step ${index + 1}`,
      content: step.trim(),
      completed: false
    }));
  }
  
  getCurrentStep() {
    return this.steps[this.currentStep] || null;
  }
  
  async nextStep() {
    if (this.currentStep < this.steps.length - 1) {
      this.steps[this.currentStep].completed = true;
      this.currentStep++;
      return this.getCurrentStep();
    }
    return null;
  }
  
  async askStepQuestion(question) {
    const currentStep = this.getCurrentStep();
    if (!currentStep) return null;
    
    const contextualQuestion = `
      I'm on step ${currentStep.id} of the ${this.topic} tutorial: "${currentStep.title}"
      
      My question: ${question}
    `;
    
    return this.assistant.askQuestion(contextualQuestion);
  }
  
  async validateCode(code) {
    const currentStep = this.getCurrentStep();
    if (!currentStep) return null;
    
    const validationRequest = `
      Please validate this code for step ${currentStep.id} of the ${this.topic} tutorial:
      
      \`\`\`lua
      ${code}
      \`\`\`
      
      Is this correct? What improvements can be made?
    `;
    
    return this.assistant.askQuestion(validationRequest);
  }
  
  getProgress() {
    const completed = this.steps.filter(step => step.completed).length;
    return {
      completed,
      total: this.steps.length,
      percentage: (completed / this.steps.length * 100).toFixed(1)
    };
  }
}
 
// Usage example
const tutorial = new InteractiveTutorial("vehicle spawning system", "esx");
 
// Start the tutorial
const firstStep = await tutorial.startTutorial();
console.log('First step:', firstStep);
 
// Ask questions during the tutorial
const clarification = await tutorial.askStepQuestion(
  "Where exactly do I put this code in my resource?"
);
 
// Validate user's code
const validation = await tutorial.validateCode(`
  RegisterCommand('spawncar', function(source, args)
    local model = args[1] or 'adder'
    local ped = GetPlayerPed(source)
    local coords = GetEntityCoords(ped)
    
    CreateVehicle(model, coords.x, coords.y, coords.z, 0.0, true, false)
  end)
`);
 
// Move to next step
const nextStep = await tutorial.nextStep();
console.log('Progress:', tutorial.getProgress());

AI Code Assistant Integration

// VSCode-like AI assistant integration
class AICodeAssistant {
  constructor() {
    this.activeFile = null;
    this.selectedCode = null;
    this.assistant = new FixieDevelopmentAssistant();
  }
  
  setActiveFile(filePath, content, framework) {
    this.activeFile = { filePath, content, framework };
    this.assistant.setContext({ 
      framework, 
      currentFile: filePath 
    });
  }
  
  setSelectedCode(code, lineStart, lineEnd) {
    this.selectedCode = { code, lineStart, lineEnd };
  }
  
  async explainCode() {
    if (!this.selectedCode) return null;
    
    const question = `
      Explain what this code does:
      
      \`\`\`lua
      ${this.selectedCode.code}
      \`\`\`
      
      Please explain it line by line if it's complex.
    `;
    
    return this.assistant.askQuestion(question);
  }
  
  async optimizeSelection() {
    if (!this.selectedCode) return null;
    
    const question = `
      Optimize this code for better performance and readability:
      
      \`\`\`lua
      ${this.selectedCode.code}
      \`\`\`
      
      Provide the optimized version with explanations.
    `;
    
    return this.assistant.askQuestion(question);
  }
  
  async findBugs() {
    if (!this.selectedCode) return null;
    
    const question = `
      Check this code for potential bugs and issues:
      
      \`\`\`lua
      ${this.selectedCode.code}
      \`\`\`
      
      List any problems and how to fix them.
    `;
    
    return this.assistant.askQuestion(question);
  }
  
  async generateDocumentation() {
    if (!this.selectedCode) return null;
    
    const question = `
      Generate documentation comments for this code:
      
      \`\`\`lua
      ${this.selectedCode.code}
      \`\`\`
      
      Include parameter descriptions and usage examples.
    `;
    
    return this.assistant.askQuestion(question);
  }
  
  async suggestImprovements() {
    if (!this.activeFile) return null;
    
    const question = `
      Analyze this ${this.activeFile.framework} resource file and suggest improvements:
      
      File: ${this.activeFile.filePath}
      
      \`\`\`lua
      ${this.activeFile.content}
      \`\`\`
      
      Focus on structure, performance, and best practices.
    `;
    
    return this.assistant.askQuestion(question);
  }
  
  async autocomplete(currentLine, cursorPosition) {
    const question = `
      Complete this line of ${this.activeFile?.framework || 'Lua'} code:
      
      "${currentLine}"
      
      Cursor is at position ${cursorPosition}. Suggest completions.
    `;
    
    return this.assistant.askQuestion(question);
  }
}
 
// Usage in editor
const codeAssistant = new AICodeAssistant();
 
// Set current file context
codeAssistant.setActiveFile(
  'server/main.lua',
  'RegisterCommand("givemoney", function(source, args)\n  local amount = tonumber(args[1])\nend)',
  'esx'
);
 
// Select code and get help
codeAssistant.setSelectedCode(
  'local amount = tonumber(args[1])',
  2,
  2
);
 
const explanation = await codeAssistant.explainCode();
const optimization = await codeAssistant.optimizeSelection();
const bugCheck = await codeAssistant.findBugs();
const docs = await codeAssistant.generateDocumentation();

Model Capabilities

Available Models

Model           | Strengths                                         | Use Cases
gpt-4o-mini     | Fast, cost-effective, good for general questions  | Quick help, code completion, basic debugging
claude-3-sonnet | Advanced reasoning, detailed explanations         | Complex problem solving, code review, architecture
gpt-4-turbo     | Balanced performance and capability               | Most development tasks, tutorials, optimization
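
Selecting a model is just a matter of setting the model field on the request. A minimal sketch, assuming the model identifiers from the table above (the pickModel helper is illustrative):

// Route heavier requests to a more capable model (IDs from the table above)
function pickModel(task) {
  if (task === 'code-review' || task === 'architecture') return 'claude-3-sonnet';
  if (task === 'tutorial' || task === 'optimization') return 'gpt-4-turbo';
  return 'gpt-4o-mini'; // fast default for quick questions
}

const body = {
  messages: [{ role: 'user', content: 'Review this resource structure for me.' }],
  model: pickModel('code-review')
};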

Specialized Knowledge

The AI assistant has specialized knowledge in:

  • FiveM/RedM Development - Resource structure, manifests, natives
  • Framework Expertise - ESX, QBCore, vRP, and custom frameworks
  • Database Integration - MySQL, oxmysql, async patterns
  • Performance Optimization - Profiling, caching, efficient code patterns
  • Security Best Practices - Input validation, injection prevention
  • UI Development - NUI, HTML/CSS/JS integration
  • Networking - Events, callbacks, synchronization

Rate Limiting

  • 50 requests per hour for free users
  • 200 requests per hour for authenticated users
  • Streaming responses count as single requests
  • Rate limits reset every hour

Error Handling

Standard HTTP status codes:

  • 200: Success
  • 400: Bad Request - Invalid message format
  • 401: Unauthorized - Authentication required
  • 429: Too Many Requests - Rate limit exceeded
  • 500: Server Error - AI service error

Example error response:

{
  "error": "Rate limit exceeded",
  "message": "You have exceeded your hourly request limit",
  "statusCode": 429,
  "resetTime": "2024-07-24T15:30:00Z"
}
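
A client can use the statusCode and resetTime fields to back off until the rate limit resets. A minimal sketch, assuming the error body shown above (the chatWithBackoff helper is illustrative):

// Retry once after the rate limit resets, using the resetTime field from the
// 429 error body shown above.
async function chatWithBackoff(body) {
  const send = () => fetch('https://fixfx.wiki/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  });

  let response = await send();

  if (response.status === 429) {
    const error = await response.json();
    const waitMs = Math.max(0, new Date(error.resetTime).getTime() - Date.now());
    console.warn(`Rate limited, retrying in ${Math.ceil(waitMs / 1000)}s`);
    await new Promise(resolve => setTimeout(resolve, waitMs));
    response = await send();
  }

  if (!response.ok) {
    throw new Error(`Chat API error: ${response.status}`);
  }

  return response.json();
}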

Best Practices

  1. Context Setting

    • Provide relevant framework and environment information
    • Include error messages and relevant code when debugging
    • Be specific about what you're trying to achieve
  2. Question Quality

    • Ask specific, focused questions
    • Include relevant code snippets
    • Mention your experience level for appropriate responses
  3. Code Integration

    • Always test AI-generated code before production use
    • Understand the code before implementing it
    • Adapt suggestions to your specific use case
  4. Performance

    • Use streaming for real-time interfaces
    • Cache responses for repeated questions (see the caching sketch after this list)
    • Implement proper error handling and fallbacks
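
A minimal sketch of response caching for repeated questions, as mentioned in the Performance item above (the in-memory cache and key scheme are illustrative):

// Cache identical questions in memory to avoid spending rate limit on repeats.
const chatCache = new Map();

async function cachedAskFixie(question, context = {}) {
  const key = JSON.stringify({ question, context });

  if (chatCache.has(key)) {
    return chatCache.get(key);
  }

  const response = await fetch('https://fixfx.wiki/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: question }],
      context
    })
  });

  const data = await response.json();
  chatCache.set(key, data);
  return data;
}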

Support

For questions about the Chat API, please join our Discord. We can help with:

  • API integration
  • Model selection
  • Best practices
  • Feature requests

The Chat API is continuously improved with better FiveM/RedM-specific knowledge and capabilities.

Always review and test AI-generated code before using it in production environments.