ChatGPT API using the Python SDK: an example

March 08, 2023

In this blog post we'll play with the OpenAI Python SDK and try to use it for the ChatGPT API.

There are a few gotchas, so please read along.

Basic setup

You can follow the official guide here (https://platform.openai.com/docs/guides/chat/introduction).

  1. Create a new Python project:
mkdir open_ai_test
cd open_ai_test
python3 -m venv venv
source venv/bin/activate
  2. Install dependencies:
pip install openai
  3. Get your API key and make it available to the SDK (see the snippet below):

https://platform.openai.com/account/api-keys
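
The 0.x openai package picks the key up automatically from the OPENAI_API_KEY environment variable, or you can set it explicitly in code. A minimal sketch, assuming you exported OPENAI_API_KEY in your shell first:

import os
import openai

# If OPENAI_API_KEY is exported, the SDK reads it on import and this line is optional;
# setting it explicitly just makes the dependency obvious.
openai.api_key = os.environ["OPENAI_API_KEY"]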

Simple example

import openai

# The SDK reads the API key from the OPENAI_API_KEY environment variable
# (or from openai.api_key, as shown above).
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
)

print(response)

The response is:

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The Los Angeles Dodgers won the 2020 World Series in Major League Baseball.",
        "role": "assistant"
      }
    }
  ],
  "created": 1678318338,
  "id": "chatcmpl-6rxYYS1KcIaK2rDU5gJWwyvNnAqjD",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 18,
    "prompt_tokens": 28,
    "total_tokens": 46
  }
}

Extracting the text response

ai_response_str = response.choices[0].message.content
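
The response object in the 0.x SDK also behaves like a plain dict, so the following should be equivalent if you prefer subscript access:

# Equivalent dict-style access on the same response object
ai_response_str = response["choices"][0]["message"]["content"]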

Setting the initial prompt

In the example above the initial system context was "You are a helpful assistant." You can start a chat with a different system prompt to give the conversation a different character.

For example:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an old rock star from Ireland"},
        {"role": "user", "content": "tell me about yourself?"},
    ],
)

The response:

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "As an AI language model, I don't have a personal life or personal experiences, so I wouldn't be able to provide personal information about myself. However, I can imagine what an old rock star from Ireland might be like. Perhaps he started his career in the 70s or 80s, with a successful band that toured extensively and released multiple albums. He might be known for his distinctive voice and energetic stage presence, and have a trademark look such as long hair, leather jackets, or colorful suits. As he got older, he might have transitioned to a solo career or pursued other interests, such as acting or activism. Despite his fame and success, he might value his privacy and spend his time with a small group of close friends and family.",
        "role": "assistant"
      }
    }
  ],
  "created": 1678316434,
  "id": "chatcmpl-6rx3qjOb48e0KcyUd41Gk2cNCN6Pv",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 155,
    "prompt_tokens": 25,
    "total_tokens": 180
  }
}

Continuing the conversation

OpenAI doesn't keep chat history on their side. So if you want to continue a conversation and have ChatGPT see the entire history, you have to send all of the previous messages again in every follow-up request.

For example, you ask ChatGPT to write a tweet announcing this article:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "write 100 character long tweet announcing a blog post "},
    ],
)

The result:

{
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "message": {
        "content": "\n\n\"Exciting news! Check out our latest blog post on tips for enhancing your productivity at work. #productivity #worktips #blogpost\"",
        "role": "assistant"
      }
    }
  ],
  "created": 1678319106,
  "id": "chatcmpl-6rxkwC3RRdKRiA6Pq0d0gHmdEb7tx",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 32,
    "prompt_tokens": 18,
    "total_tokens": 50
  }
}

Now say you like the answer, but you want it to have more emojis and include the word "innovation".

You need to feed all of that information back in:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "write 100 character long tweet announcing a blog post "},
        {"role": "assistant", "content": "\"Exciting news! Check out our latest blog post on tips for enhancing your productivity at work. #productivity #worktips #blogpost\""},
        {"role": "user", "content": "can you keep that tweet as is but add more emojis and use word \"innovation\""},
    ],
)

The response:

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "\"\ud83d\udce2Exciting news! Check out our latest #blogpost on tips for enhancing your \u26a1\ufe0fproductivity and fostering #innovation at work.\ud83d\udc68\u200d\ud83d\udcbb #productivity #worktips\"",
        "role": "assistant"
      }
    }
  ],
  "created": 1678319278,
  "id": "chatcmpl-6rxniZegeMhb60NEeTyamMcA8RJ4C",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 50,
    "prompt_tokens": 77,
    "total_tokens": 127
  }
}

Decoded, the tweet reads:

📢Exciting news! Check out our latest #blogpost on tips for enhancing your ⚡️productivity and fostering #innovation at work.👨💻 #productivity #worktips
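
Since the API is stateless, the usual pattern is to keep the message list yourself and append both your prompts and the assistant's replies before every follow-up call. Below is a minimal sketch of that pattern; the Chat class and its say method are just illustrative names, not part of the SDK:

import openai


class Chat:
    """Keeps the full message history and resends it on every request."""

    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def say(self, user_content):
        # Append the new user message, send the whole history, store the reply.
        self.messages.append({"role": "user", "content": user_content})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=self.messages,
        )
        reply = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": reply})
        return reply


chat = Chat()
print(chat.say("write 100 character long tweet announcing a blog post"))
print(chat.say('can you keep that tweet as is but add more emojis and use the word "innovation"'))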

Token usage

Now, while it's great that you can continue the conversation, that conversation becomes more and more expensive, because the number of tokens used for each call snowballs.

Note that the initial request used 50 tokens, while the follow-up used 127, because we had to send the entire history again.

Initial request:

"usage": {
  "completion_tokens": 32,
  "prompt_tokens": 18,
  "total_tokens": 50
}

Follow-up request:

"usage": {
 "completion_tokens": 50,
 "prompt_tokens": 77,
 "total_tokens": 127
}

Tokens are relatively cheap, but this is still something you should be mindful of.
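
If you want to keep an eye on this, you can add up the usage field from each response and turn it into a rough cost estimate. A small sketch, assuming the responses from the earlier calls were collected into a list, and using the roughly $0.002 per 1K tokens gpt-3.5-turbo price at the time of writing (check the current pricing page before relying on it):

# Rough running total of tokens and estimated cost across several calls.
# Pricing assumption: gpt-3.5-turbo at ~$0.002 per 1K tokens (as of March 2023).
PRICE_PER_1K_TOKENS = 0.002

total_tokens = 0
for response in responses:  # list of earlier ChatCompletion.create results (assumed)
    total_tokens += response.usage.total_tokens

estimated_cost = total_tokens / 1000 * PRICE_PER_1K_TOKENS
print(f"tokens used: {total_tokens}, estimated cost: ${estimated_cost:.4f}")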

Conclusion

We recommend all engineers give the ChatGPT API a try. It's relatively cheap: over the last several days we spent only a few cents on it while running multiple requests.