For an introduction to what AI agents are and what they can do, refer to this page.

Prerequisites

  1. A Scoopika account. Create one here.
  2. A Scoopika access token. Generate one here.

Create an AI agent

Once you’ve logged in, you can create a new agent here.

  • Name: The name of the agent. Agents are aware of their own names and their companions' names.
  • Description: A brief description of the agent. Note that this is not the agent's prompt; it is only useful when using companions.
  • Avatar: The agent's avatar. Only used for display purposes.
  • LLM: The large language model that powers the agent (its brain).
  • Prompt: A set of instructions that the agent acts and behaves based on.
  • Voice: The agent's voice. It will be used in voice-based interactions (we provide two voice options for now).

Selecting the LLM

LLMs are the underlying technology powering your agents. You can choose from a list of available LLMs and providers. Ensure you provide the corresponding API keys in your code for the chosen LLM providers when initializing a Scoopika instance (see how).

API keys are kept securely on your servers and never shared with our platform unless you choose to add them to your account (optional).
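For example, if your agents use a model from OpenAI, the key is passed through the engines option when the Scoopika instance is created. A minimal sketch (the "openai" engine name matches the server-side example below; names for other providers depend on what you selected):

import { Scoopika } from "@scoopika/scoopika";

// Minimal sketch: provide one API key per LLM provider your agents use.
// The key stays in your code/environment and is only used to call the provider.
const scoopika = new Scoopika({
	token: "YOUR_SCOOPIKA_TOKEN",
	engines: {
		openai: "YOUR_OPENAI_KEY"
	}
});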

Run your agent

Before moving to your app, you can test your agent in the playground (supports both text and voice interaction).

FUN FACT

The playground was built using Scoopika itself (with the Scoopika React library on the front-end).

Scoopika agents are designed to run on the server side. We offer packages for both the server and client, supporting features like real-time streaming hooks and client-side actions out of the box.

Web Usage (server-client)

To run agents on the client side, you first need to add a route to your API that handles everything related to Scoopika; you can then run the agents from the client side (a rough sketch of such a route follows the steps below). Steps:

  1. Scoopika for the web guide: Set up your server side and client side.
  2. Agents client-side usage page: Learn how to use agents on the client side.
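As a rough illustration only (the guides above cover the supported setup), this sketch shows what such an API route could look like with Express. Express, the route path, and the plain-text streaming format are assumptions here; agent.run and its hooks are the same API shown in the server-side example below:

import express from "express";
import { Scoopika, Agent } from "@scoopika/scoopika";

const app = express();
app.use(express.json());

// Initialize Scoopika and the agent once, on the server
const scoopika = new Scoopika({
	token: "YOUR_SCOOPIKA_TOKEN",
	engines: { openai: "YOUR_OPENAI_KEY" }
});
const agent = new Agent("AGENT_ID", scoopika);

// Hypothetical route: the client sends a message and receives the
// agent's output streamed back token by token
app.post("/api/scoopika/run", async (req, res) => {
	res.setHeader("Content-Type", "text/plain; charset=utf-8");

	await agent.run({
		inputs: { message: req.body.message },
		hooks: {
			onToken: (token) => res.write(token) // Forward each token as it's generated
		}
	});

	res.end();
});

app.listen(3000);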

Server-side Usage

First install the server-side package:

npm install @scoopika/scoopika

Then initialize a Scoopika instance, create the agent, and run it:

import { Scoopika, Agent } from "@scoopika/scoopika";

const scoopika = new Scoopika({
	token: "YOUR_SCOOPIKA_TOKEN",
	engines: {
		openai: "YOUR_OPENAI_KEY" // Replace based on the LLMs providers you're using
	}
});

const agent = new Agent("AGENT_ID", scoopika);

(async () => {
	const response = await agent.run({
		inputs: { message: "Hello!" }
	});

	console.log(response);
})();

Further Exploration: For comprehensive server-side documentation, refer to:

Utilizing Streaming Hooks (Optional):

Streaming hooks allow you to capture specific events during the agent execution process. Learn more about hooks in the API Reference.

Example with hooks:

await agent.run({
	inputs: { message: "Hello!" },
	hooks: {
		onToken: (token) => console.log(token), // Capture every token the agent outputs
		onAgentResponse: (response) => console.log(response), // Capture the agent's final response
		onError: (err) => console.log(err.error) // Capture errors
	}
});
