Getting Started Steps

1. Create your account

Visit the Scoopika platform and create a free account.

2. Install Scoopika in your project

For React or Next.js applications, refer to this guide.

First, install the main SDK:

npm install @scoopika/scoopika
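
If you use a different package manager, the equivalent commands are:

yarn add @scoopika/scoopika
pnpm add @scoopika/scoopika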

This SDK is built for server-side usage and for deploying your AI agents as API endpoints. You'll learn how to do that later; don't worry, it's a simple copy-paste process that we'll walk through together.

We also have packages for client-side usage and React applications, which we'll cover later (they require a deployed AI agent).

3. Generate access token

You need to generate your own access token from the platform by clicking on the settings icon in the top-left corner.

Once your access token is generated, copy it and add it to your environment variables like this:

SCOOPIKA_TOKEN="YOUR_TOKEN_GOES_HERE"
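
If you load environment variables with a tool like dotenv, a quick sanity check before initializing Scoopika can save you some debugging. This snippet is just a convenience suggestion, not part of the SDK:

if (!process.env.SCOOPIKA_TOKEN) {
    // fail fast instead of hitting authentication errors later
    throw new Error('SCOOPIKA_TOKEN is not set');
}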

This access token is used to authenticate your application with Scoopika only when you send audio or URL inputs to your AI agents, or enable voice responses; otherwise it won't be used.

Processing audio files and URLs and generating voice responses happen on our servers, because they require running some heavy AI models.

4. Initialize Scoopika

Now you can initialize Scoopika in your app; it only takes one line of code:

import { Scoopika } from '@scoopika/scoopika';

const scoopika = new Scoopika(); // the token is read from SCOOPIKA_TOKEN in your environment

5. Connect LLM provider

To run AI agents, you need to connect your LLM provider using your own API key, so you can use the LLMs it provides.

Scoopika currently supports a number of LLM providers, and the list is growing every day. You can connect multiple LLM providers in your app and use models from different providers side by side; you can see how to connect these providers here. Here are a few examples:

scoopika.connectProvider('openai', 'YOUR_OPENAI_API_KEY');
scoopika.connectProvider('groq', 'YOUR_GROQ_API_KEY');
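
In a real app you'll want to keep those API keys in environment variables too, rather than hard-coding them. For example (OPENAI_API_KEY and GROQ_API_KEY are just example variable names):

// read provider keys from the environment instead of committing them to source
scoopika.connectProvider('openai', process.env.OPENAI_API_KEY!);
scoopika.connectProvider('groq', process.env.GROQ_API_KEY!);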

You’ll see a number of supported LLMs under each provider in the platform, but you can really use any model those providers offer; you just need to pass its name when creating your AI agents. So let's do it!

6. Create AI agents

Don’t know what an AI agent is? See this page.

AI agents are just LLMs with access to extra tools and functionality. In Scoopika, AI agents are the main component, and they can be used for text, voice, and object generation.

Creating an AI agent is simple:

import { Agent } from '@scoopika/scoopika';

const myAgent = new Agent(scoopika, {
    provider: 'openai',
    model: 'gpt-4', // any LLM provided by the provider, supports even custom or fine-tuned models
    prompt: 'Your role is to...' // the prompt instructions
});

You can also pass extra optional properties to your Scoopika instance and AI agent to give them access to long-term memory or knowledge stores (hosted services we provide to make your development much easier):

import { Scoopika, Agent } from '@scoopika/scoopika';

const scoopika = new Scoopika({
    memory: 'MEMORY_STORE_ID', // created from the platform
})

const myAgent = new Agent(scoopika, {
    provider: 'openai',
    model: 'gpt-4',
    prompt: 'Your role is to...',
    knowledge: 'KNOWLEDGE_STORE_ID' // created from the platform
});

If you want to give your AI agents long-term memory and save conversation history, you can create a memory store here.

If you want to extend the model’s knowledge with information from files, PDFs, or websites, you can create a knowledge store here, upload your info, and just add its ID.

Let’s see a full usage example with Groq:

import { Scoopika, Agent } from '@scoopika/scoopika';

const scoopika = new Scoopika();
scoopika.connectProvider('groq', 'YOUR_GROQ_API_KEY');

const agent = new Agent(scoopika, {
    provider: 'groq',
    model: 'llama3-8b-8192',
    prompt: 'You are a helpful AI assistant'
});

// Usage for text generation
(async () => {
    const { data, error } = await agent.run({
        inputs: { message: 'Hello' }
    });
    
    if (error === null) {
        console.log(data); // handle data
    } else {
        console.error(error); // handle errors
    }
})();

But you’re here for the heavy stuff, right? In the above code you get built-in error recovery without any additional code, but what about multimodal inputs and streaming responses? Well, it's just the same:

const { data, error } = await agent.run({
    // MULTIMODAL INPUTS (AT LEAST ONE INPUT IS REQUIRED)
    inputs: {
        message: 'Hello', // text inputs
        audio: [
            // list of audio files
            { type: 'base64', value: 'BASE_64_VALUE' }
        ],
        images: [
            // list of image URLs (NOTICE: the LLM you're using has to support vision)
            'https://avatars.githubusercontent.com/u/164682378'
        ],
        urls: [
            // list of web page URLs
            'https://scoopika.com/pricing'
        ]
    },
    // STREAMING HOOKS
    hooks: {
        onToken: (t) => {
            // capture each generated token (do anything here)
            process.stdout.write(t)
        },
        // +6 useful hooks
    }
})
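
If your audio lives in a local file, one way to produce that base64 value is with Node's fs module (this assumes, based on the type: 'base64' field above, that a plain base64 string is what's expected; 'recording.wav' is a hypothetical file):

import { readFileSync } from 'fs';

// read the file into a Buffer and encode it as a base64 string
const audioBase64 = readFileSync('recording.wav').toString('base64');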

But what about conversation history? Once you connect a memory store to your AI agents, you can pass session IDs when running an AI agent; learn more here.

const { data, error } = await agent.run({
    options: { session_id: 'SESSION_123' },
    // the rest of the inputs
})
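
Session IDs appear to be arbitrary strings you manage yourself, as the 'SESSION_123' placeholder above suggests. One common approach is to generate a unique ID per conversation (randomUUID is built into modern Node, nothing Scoopika-specific):

import { randomUUID } from 'crypto';

// store this alongside the user's conversation and reuse it on every run
const sessionId = randomUUID();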

Beyond the session ID, you can pass and control a lot of things using the run options:

const { data, error } = await agent.run({
    options: { 
        session_id: 'SESSION_123',
        voice: true, // enable voice responses
        llm: {
            // config of the LLM
            temperature: 0
        }
    },
    // the rest of the inputs
})

7. Deploy AI agent

Now that we have an AI agent, let's deploy it as an API endpoint so we can use it from the client side with built-in streaming support and data validation!

The process of deploying an AI agent might change a little based on the framework you're using, but it should work with any web framework that supports HTTP streaming. For example, when using Express this is how you do it:

app.post('/my-agent', (req, res) => {
    myAgent.serve({
        request: req.body, // requires JSON body-parsing middleware (see the setup sketch below)
        stream: (s) => res.write(s),
        end: () => res.end()
    })
})
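
For completeness, here's a minimal sketch of the Express setup this handler assumes (the route and port are arbitrary choices for illustration):

import express from 'express';

const app = express();
app.use(express.json()); // serve() expects a parsed JSON body

// ... register the app.post('/my-agent', ...) handler from above here

app.listen(3000);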

See how to deploy your AI agent based on your framework here.

What’s Next?

Once you have a deployed AI agent, you can start using the client-side library to run it. The process is simple and just the same as running the AI agent on the server side; learn more here.

Learn more about:

Deploying AI agents and using them on the client-side.

Text generation and building AI assistants.

Object generation and data extraction.

Memory, conversation sessions, and history.

Running AI agents with options.