id: hJ6sO6F6sGJu2kBVXXG9n
date: 2024-03-30T09:06:02.051Z
summary: generating code from openai docs
messages:
  - id: lNE6asN2sC3jZjaGkcrfH
    date: 2024-03-29T11:26:24.893Z
    type: system
    text: >
      I am ChatCraft, a web-based, expert programming AI assistant. I help
      programmers learn, experiment, and be more creative with code.


      I follow these rules when responding:


      - Use GitHub flavored Markdown

      - ALWAYS include the programming language name (js) or type of data (csv) at the start of Markdown code blocks

      - Format ALL lines of code to 80 characters or fewer

      - Use Mermaid diagrams when discussing visual topics

      - If using functions, only use the specific functions I have been provided with

      - If responding with math markdown, inline or otherwise, I use KaTeX syntax in math Markdown by enclosing EVERY mathematical expression, equation, variable, and formula with double-dollar signs `($$)`, for example: $$O(n\log n)$$, $$1024 * 1024 = 1048576$$, $$1024^2$$, $$X$$
    imageUrls: []
  - id: zDzfEVAyhgrvIm3ZrPPKE
    date: 2024-03-29T11:40:11.109Z
    type: human
    text: >
      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const myAssistant = await openai.beta.assistants.create({
          instructions:
            "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
          name: "Math Tutor",
          tools: [{ type: "code_interpreter" }],
          model: "gpt-4",
        });

        console.log(myAssistant);
      }


      main();

      ```


      ```json

      {
        "id": "asst_abc123",
        "object": "assistant",
        "created_at": 1698984975,
        "name": "Math Tutor",
        "description": null,
        "model": "gpt-4",
        "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
        "tools": [
          {
            "type": "code_interpreter"
          }
        ],
        "file_ids": [],
        "metadata": {}
      }

      ```


      post https://api.openai.com/v1/assistants/{assistant_id}/files


      Create an assistant file by attaching a [File](/docs/api-reference/files) to an [assistant](/docs/api-reference/assistants).


      ### Path parameters


      The ID of the assistant for which to create a File.


      ### Request body


      A [File](/docs/api-reference/files) ID (with `purpose="assistants"`) that the assistant should use. Useful for tools like `retrieval` and `code_interpreter` that can access files.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const myAssistantFile = await openai.beta.assistants.files.create(
          "asst_abc123",
          {
            file_id: "file-abc123"
          }
        );
        console.log(myAssistantFile);
      }


      main();

      ```


      ```json

      {
        "id": "file-abc123",
        "object": "assistant.file",
        "created_at": 1699055364,
        "assistant_id": "asst_abc123"
      }

      ```


      get https://api.openai.com/v1/assistants


      Returns a list of assistants.


      ### Query parameters


      A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.


      Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.


      A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `after=obj_foo` in order to fetch the next page of the list.


      A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `before=obj_foo` in order to fetch the previous page of the list.
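

      To see how the cursor parameters fit together, here is a minimal sketch (not part of the pasted reference) that fetches two pages of assistants by feeding the last ID of one page back in as `after`:


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        // First page: newest assistants first, up to 20 per page.
        const firstPage = await openai.beta.assistants.list({
          order: "desc",
          limit: 20,
        });

        console.log(firstPage.data);

        // Next page: pass the ID of the last object received as `after`.
        const lastId = firstPage.data[firstPage.data.length - 1]?.id;
        const nextPage = await openai.beta.assistants.list({
          order: "desc",
          limit: 20,
          after: lastId,
        });

        console.log(nextPage.data);
      }


      main();

      ```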


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const myAssistants = await openai.beta.assistants.list({
          order: "desc",
          limit: 20,
        });

        console.log(myAssistants.data);
      }


      main();

      ```


      ```json

      {
        "object": "list",
        "data": [
          {
            "id": "asst_abc123",
            "object": "assistant",
            "created_at": 1698982736,
            "name": "Coding Tutor",
            "description": null,
            "model": "gpt-4",
            "instructions": "You are a helpful assistant designed to make me better at coding!",
            "tools": [],
            "file_ids": [],
            "metadata": {}
          },
          {
            "id": "asst_abc456",
            "object": "assistant",
            "created_at": 1698982718,
            "name": "My Assistant",
            "description": null,
            "model": "gpt-4",
            "instructions": "You are a helpful assistant designed to make me better at coding!",
            "tools": [],
            "file_ids": [],
            "metadata": {}
          },
          {
            "id": "asst_abc789",
            "object": "assistant",
            "created_at": 1698982643,
            "name": null,
            "description": null,
            "model": "gpt-4",
            "instructions": null,
            "tools": [],
            "file_ids": [],
            "metadata": {}
          }
        ],
        "first_id": "asst_abc123",
        "last_id": "asst_abc789",
        "has_more": false
      }

      ```


      get https://api.openai.com/v1/assistants/{assistant_id}/files


      Returns a list of assistant files.


      ### Path parameters


      The ID of the assistant the file belongs to.


      ### Query parameters


      A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.


      Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.


      A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `after=obj_foo` in order to fetch the next page of the list.


      A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `before=obj_foo` in order to fetch the previous page of the list.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const assistantFiles = await openai.beta.assistants.files.list(
          "asst_abc123"
        );
        console.log(assistantFiles);
      }


      main();

      ```


      ```json

      {
        "object": "list",
        "data": [
          {
            "id": "file-abc123",
            "object": "assistant.file",
            "created_at": 1699060412,
            "assistant_id": "asst_abc123"
          },
          {
            "id": "file-abc456",
            "object": "assistant.file",
            "created_at": 1699060412,
            "assistant_id": "asst_abc123"
          }
        ],
        "first_id": "file-abc123",
        "last_id": "file-abc456",
        "has_more": false
      }

      ```


      get https://api.openai.com/v1/assistants/{assistant_id}


      Retrieves an assistant.


      ### Path parameters


      The ID of the assistant to retrieve.


      ### Returns


      The [assistant](/docs/api-reference/assistants/object) object matching the specified ID.


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const myAssistant = await openai.beta.assistants.retrieve(
          "asst_abc123"
        );

        console.log(myAssistant);
      }


      main();

      ```


      ```json

      {
        "id": "asst_abc123",
        "object": "assistant",
        "created_at": 1699009709,
        "name": "HR Helper",
        "description": null,
        "model": "gpt-4",
        "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
        "tools": [
          {
            "type": "retrieval"
          }
        ],
        "file_ids": [
          "file-abc123"
        ],
        "metadata": {}
      }

      ```


      get https://api.openai.com/v1/assistants/{assistant_id}/files/{file_id}


      Retrieves an AssistantFile.


      ### Path parameters


      The ID of the assistant that the file belongs to.


      The ID of the file we're getting.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const myAssistantFile = await openai.beta.assistants.files.retrieve(
          "asst_abc123",
          "file-abc123"
        );
        console.log(myAssistantFile);
      }


      main();

      ```


      ```json

      {
        "id": "file-abc123",
        "object": "assistant.file",
        "created_at": 1699055364,
        "assistant_id": "asst_abc123"
      }

      ```


      post https://api.openai.com/v1/assistants/{assistant_id}


      Modifies an assistant.


      ### Path parameters


      The ID of the assistant to modify.


      ### Request body


      ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.


      The name of the assistant. The maximum length is 256 characters.


      The description of the assistant. The maximum length is 512 characters.


      The system instructions that the assistant uses. The maximum length is 32768 characters.


      A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`.


      A list of [File](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached to the list but does not show up in the list, it will be deleted from the assistant.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const myUpdatedAssistant = await openai.beta.assistants.update(
          "asst_abc123",
          {
            instructions:
              "You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
            name: "HR Helper",
            tools: [{ type: "retrieval" }],
            model: "gpt-4",
            file_ids: [
              "file-abc123",
              "file-abc456",
            ],
          }
        );

        console.log(myUpdatedAssistant);
      }


      main();

      ```


      ```json

      {
        "id": "asst_abc123",
        "object": "assistant",
        "created_at": 1699009709,
        "name": "HR Helper",
        "description": null,
        "model": "gpt-4",
        "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always response with info from either of the files.",
        "tools": [
          {
            "type": "retrieval"
          }
        ],
        "file_ids": [
          "file-abc123",
          "file-abc456"
        ],
        "metadata": {}
      }

      ```
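

      None of the examples in this reference show a `function` tool. As a rough, hypothetical sketch (the `get_weather` name and its JSON Schema parameters are illustrative, not from the docs), a function tool could be attached with the same update call:


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const myUpdatedAssistant = await openai.beta.assistants.update(
          "asst_abc123",
          {
            tools: [
              {
                type: "function",
                // Hypothetical function, described with JSON Schema.
                function: {
                  name: "get_weather",
                  description: "Get the current weather for a city.",
                  parameters: {
                    type: "object",
                    properties: {
                      city: { type: "string" },
                    },
                    required: ["city"],
                  },
                },
              },
            ],
          }
        );

        console.log(myUpdatedAssistant);
      }


      main();

      ```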


      delete https://api.openai.com/v1/assistants/{assistant_id}


      Delete an assistant.


      ### Path parameters


      The ID of the assistant to delete.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const response = await openai.beta.assistants.del("asst_abc123");

        console.log(response);
      }

      main();

      ```


      ```json

      {
        "id": "asst_abc123",
        "object": "assistant.deleted",
        "deleted": true
      }

      ```


      delete https://api.openai.com/v1/assistants/{assistant_id}/files/{file_id}


      Delete an assistant file.


      ### Path parameters


      The ID of the assistant that the file belongs to.


      The ID of the file to delete.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const deletedAssistantFile = await openai.beta.assistants.files.del(
          "asst_abc123",
          "file-abc123"
        );
        console.log(deletedAssistantFile);
      }


      main();

      ```


      ```json

      {
        "id": "file-abc123",
        "object": "assistant.file.deleted",
        "deleted": true
      }

      ```


      Represents an `assistant` that can call the model and use tools.


      The identifier, which can be referenced in API endpoints.


      The object type, which is always `assistant`.


      The Unix timestamp (in seconds) for when the assistant was created.


      The name of the assistant. The maximum length is 256 characters.


      The description of the assistant. The maximum length is 512 characters.


      ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.


      The system instructions that the assistant uses. The maximum length is 32768 characters.


      A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`.


      A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ```json

      {
        "id": "asst_abc123",
        "object": "assistant",
        "created_at": 1698984975,
        "name": "Math Tutor",
        "description": null,
        "model": "gpt-4",
        "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
        "tools": [
          {
            "type": "code_interpreter"
          }
        ],
        "file_ids": [],
        "metadata": {}
      }

      ```


      A list of [Files](/docs/api-reference/files) attached to an `assistant`.


      The identifier, which can be referenced in API endpoints.


      The object type, which is always `assistant.file`.


      The Unix timestamp (in seconds) for when the assistant file was created.


      The assistant ID that the file is attached to.


      ```json

      {
        "id": "file-abc123",
        "object": "assistant.file",
        "created_at": 1699055364,
        "assistant_id": "asst_abc123"
      }

      ```


      Create threads that assistants can interact with.


      Related guide: [Assistants](/docs/assistants/overview)


      post https://api.openai.com/v1/threads


      Create a thread.


      ### Request body


      A list of [messages](/docs/api-reference/messages) to start the thread with.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const emptyThread = await openai.beta.threads.create();

        console.log(emptyThread);
      }


      main();

      ```


      ```json

      {
        "id": "thread_abc123",
        "object": "thread",
        "created_at": 1699012949,
        "metadata": {}
      }

      ```
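

      The documented example above creates an empty thread. Here is a minimal sketch (not part of the pasted reference) that also exercises the `messages` and `metadata` request body fields, reusing the message shape shown in the `createAndRun` example further below:


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        // Seed the thread with one user message and a metadata entry.
        const thread = await openai.beta.threads.create({
          messages: [
            { role: "user", content: "Hello, what is AI?" },
          ],
          metadata: { user: "abc123" },
        });

        console.log(thread);
      }


      main();

      ```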


      get https://api.openai.com/v1/threads/{thread_id}


      Retrieves a thread.


      ### Path parameters


      The ID of the thread to retrieve.


      ### Returns


      The [thread](/docs/api-reference/threads/object) object matching the specified ID.


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const myThread = await openai.beta.threads.retrieve(
          "thread_abc123"
        );

        console.log(myThread);
      }


      main();

      ```


      ```json

      {
        "id": "thread_abc123",
        "object": "thread",
        "created_at": 1699014083,
        "metadata": {}
      }

      ```


      post https://api.openai.com/v1/threads/{thread_id}


      Modifies a thread.


      ### Path parameters


      The ID of the thread to modify. Only the `metadata` can be modified.


      ### Request body


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ### Returns


      The modified [thread](/docs/api-reference/threads/object) object matching the specified ID.


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const updatedThread = await openai.beta.threads.update(
          "thread_abc123",
          {
            metadata: { modified: "true", user: "abc123" },
          }
        );

        console.log(updatedThread);
      }


      main();

      ```


      ```json

      {
        "id": "thread_abc123",
        "object": "thread",
        "created_at": 1699014083,
        "metadata": {
          "modified": "true",
          "user": "abc123"
        }
      }

      ```


      delete https://api.openai.com/v1/threads/{thread_id}


      Delete a thread.


      ### Path parameters


      The ID of the thread to delete.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const response = await openai.beta.threads.del("thread_abc123");

        console.log(response);
      }

      main();

      ```


      ```json

      {
        "id": "thread_abc123",
        "object": "thread.deleted",
        "deleted": true
      }

      ```


      Represents a thread that contains [messages](/docs/api-reference/messages).


      The identifier, which can be referenced in API endpoints.


      The object type, which is always `thread`.


      The Unix timestamp (in seconds) for when the thread was created.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ```json

      {
        "id": "thread_abc123",
        "object": "thread",
        "created_at": 1698107661,
        "metadata": {}
      }

      ```


      Create messages within threads.


      Related guide: [Assistants](/docs/assistants/overview)


      post https://api.openai.com/v1/threads/{thread_id}/messages


      Create a message.


      ### Path parameters


      The ID of the [thread](/docs/api-reference/threads) to create a message for.


      ### Request body


      The role of the entity that is creating the message. Currently only `user` is supported.


      The content of the message.


      A list of [File](/docs/api-reference/files) IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like `retrieval` and `code_interpreter` that can access and use files.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const threadMessages = await openai.beta.threads.messages.create(
          "thread_abc123",
          { role: "user", content: "How does AI work? Explain it in simple terms." }
        );

        console.log(threadMessages);
      }


      main();

      ```


      ```json

      {
        "id": "msg_abc123",
        "object": "thread.message",
        "created_at": 1699017614,
        "thread_id": "thread_abc123",
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": {
              "value": "How does AI work? Explain it in simple terms.",
              "annotations": []
            }
          }
        ],
        "file_ids": [],
        "assistant_id": null,
        "run_id": null,
        "metadata": {}
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/messages


      Returns a list of messages for a given thread.


      ### Path parameters


      The ID of the [thread](/docs/api-reference/threads) the messages belong to.


      ### Query parameters


      A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.


      Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.


      A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `after=obj_foo` in order to fetch the next page of the list.


      A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `before=obj_foo` in order to fetch the previous page of the list.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const threadMessages = await openai.beta.threads.messages.list(
          "thread_abc123"
        );

        console.log(threadMessages.data);
      }


      main();

      ```


      ```json

      {
        "object": "list",
        "data": [
          {
            "id": "msg_abc123",
            "object": "thread.message",
            "created_at": 1699016383,
            "thread_id": "thread_abc123",
            "role": "user",
            "content": [
              {
                "type": "text",
                "text": {
                  "value": "How does AI work? Explain it in simple terms.",
                  "annotations": []
                }
              }
            ],
            "file_ids": [],
            "assistant_id": null,
            "run_id": null,
            "metadata": {}
          },
          {
            "id": "msg_abc456",
            "object": "thread.message",
            "created_at": 1699016383,
            "thread_id": "thread_abc123",
            "role": "user",
            "content": [
              {
                "type": "text",
                "text": {
                  "value": "Hello, what is AI?",
                  "annotations": []
                }
              }
            ],
            "file_ids": [
              "file-abc123"
            ],
            "assistant_id": null,
            "run_id": null,
            "metadata": {}
          }
        ],
        "first_id": "msg_abc123",
        "last_id": "msg_abc456",
        "has_more": false
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}/files


      Returns a list of message files.


      ### Path parameters


      The ID of the thread that the message and files belong to.


      The ID of the message that the files belong to.


      ### Query parameters


      A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.


      Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.


      A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `after=obj_foo` in order to fetch the next page of the list.


      A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `before=obj_foo` in order to fetch the previous page of the list.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const messageFiles = await openai.beta.threads.messages.files.list(
          "thread_abc123",
          "msg_abc123"
        );
        console.log(messageFiles);
      }


      main();

      ```


      ```json

      {
        "object": "list",
        "data": [
          {
            "id": "file-abc123",
            "object": "thread.message.file",
            "created_at": 1699061776,
            "message_id": "msg_abc123"
          },
          {
            "id": "file-abc123",
            "object": "thread.message.file",
            "created_at": 1699061776,
            "message_id": "msg_abc123"
          }
        ],
        "first_id": "file-abc123",
        "last_id": "file-abc123",
        "has_more": false
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}


      Retrieve a message.


      ### Path parameters


      The ID of the [thread](/docs/api-reference/threads) to which this message belongs.


      The ID of the message to retrieve.


      ### Returns


      The [message](/docs/api-reference/threads/messages/object) object matching the specified ID.


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const message = await openai.beta.threads.messages.retrieve(
          "thread_abc123",
          "msg_abc123"
        );

        console.log(message);
      }


      main();

      ```


      ```json

      {
        "id": "msg_abc123",
        "object": "thread.message",
        "created_at": 1699017614,
        "thread_id": "thread_abc123",
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": {
              "value": "How does AI work? Explain it in simple terms.",
              "annotations": []
            }
          }
        ],
        "file_ids": [],
        "assistant_id": null,
        "run_id": null,
        "metadata": {}
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}/files/{file_id}


      Retrieves a message file.


      ### Path parameters


      The ID of the thread to which the message and File belong.


      The ID of the message the file belongs to.


      The ID of the file being retrieved.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const messageFile = await openai.beta.threads.messages.files.retrieve(
          "thread_abc123",
          "msg_abc123",
          "file-abc123"
        );
        console.log(messageFile);
      }


      main();

      ```


      ```json

      {
        "id": "file-abc123",
        "object": "thread.message.file",
        "created_at": 1699061776,
        "message_id": "msg_abc123"
      }

      ```


      post https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}


      Modifies a message.


      ### Path parameters


      The ID of the thread to which this message belongs.


      The ID of the message to modify.


      ### Request body


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const message = await openai.beta.threads.messages.update(
          "thread_abc123",
          "msg_abc123",
          {
            metadata: {
              modified: "true",
              user: "abc123",
            },
          }
        );

        console.log(message);
      }


      main();

      ```


      ```json

      {
        "id": "msg_abc123",
        "object": "thread.message",
        "created_at": 1699017614,
        "thread_id": "thread_abc123",
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": {
              "value": "How does AI work? Explain it in simple terms.",
              "annotations": []
            }
          }
        ],
        "file_ids": [],
        "assistant_id": null,
        "run_id": null,
        "metadata": {
          "modified": "true",
          "user": "abc123"
        }
      }

      ```


      Represents a message within a [thread](/docs/api-reference/threads).


      The identifier, which can be referenced in API endpoints.


      The object type, which is always `thread.message`.


      The Unix timestamp (in seconds) for when the message was created.


      The [thread](/docs/api-reference/threads) ID that this message belongs to.


      The status of the message, which can be either `in_progress`, `incomplete`, or `completed`.


      On an incomplete message, details about why the message is incomplete.


      The Unix timestamp (in seconds) for when the message was completed.


      The Unix timestamp (in seconds) for when the message was marked as incomplete.


      The entity that produced the message. One of `user` or `assistant`.


      The content of the message as an array of text and/or images.


      If applicable, the ID of the [assistant](/docs/api-reference/assistants) that authored this message.


      If applicable, the ID of the [run](/docs/api-reference/runs) associated with the authoring of this message.


      A list of [file](/docs/api-reference/files) IDs that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files. A maximum of 10 files can be attached to a message.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ```json

      {
        "id": "msg_abc123",
        "object": "thread.message",
        "created_at": 1698983503,
        "thread_id": "thread_abc123",
        "role": "assistant",
        "content": [
          {
            "type": "text",
            "text": {
              "value": "Hi! How can I help you today?",
              "annotations": []
            }
          }
        ],
        "file_ids": [],
        "assistant_id": "asst_abc123",
        "run_id": "run_abc123",
        "metadata": {}
      }

      ```


      A list of files attached to a `message`.


      The identifier, which can be referenced in API endpoints.


      The object type, which is always `thread.message.file`.


      The Unix timestamp (in seconds) for when the message file was created.


      ```json

      {
        "id": "file-abc123",
        "object": "thread.message.file",
        "created_at": 1698107661,
        "message_id": "message_QLoItBbqwyAJEzlTy4y9kOMM",
        "file_id": "file-abc123"
      }

      ```


      Represents an execution run on a thread.


      Related guide: [Assistants](/docs/assistants/overview)


      post https://api.openai.com/v1/threads/{thread_id}/runs


      Create a run.


      ### Path parameters


      The ID of the thread to run.


      ### Request body


      The ID of the [assistant](/docs/api-reference/assistants) to use to execute this run.


      The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.


      Overrides the [instructions](/docs/api-reference/assistants/createAssistant) of the assistant. This is useful for modifying the behavior on a per-run basis.


      Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.


      Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      If `true`, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a `data: [DONE]` message.
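

      As a rough sketch of consuming that stream with the Node SDK (assuming the SDK surfaces the server-sent events as an async iterable of objects with `event` and `data` fields; check the SDK documentation for the exact shape):


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        // Assumption: with `stream: true` the call resolves to an event
        // stream rather than a single run object.
        const stream = await openai.beta.threads.runs.create(
          "thread_abc123",
          { assistant_id: "asst_abc123", stream: true }
        );

        for await (const event of stream) {
          // Each event names what changed (e.g. "thread.run.completed")
          // and carries the associated object as its data payload.
          console.log(event.event, event.data);
        }
      }


      main();

      ```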


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const run = await openai.beta.threads.runs.create(
          "thread_abc123",
          { assistant_id: "asst_abc123" }
        );

        console.log(run);
      }


      main();

      ```


      ```json

      {
        "id": "run_abc123",
        "object": "thread.run",
        "created_at": 1699063290,
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "status": "queued",
        "started_at": 1699063290,
        "expires_at": null,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": 1699063291,
        "last_error": null,
        "model": "gpt-4",
        "instructions": null,
        "tools": [
          {
            "type": "code_interpreter"
          }
        ],
        "file_ids": [
          "file-abc123",
          "file-abc456"
        ],
        "metadata": {},
        "usage": null
      }

      ```


      post https://api.openai.com/v1/threads/runs


      Create a thread and run it in one request.


      ### Request body


      The ID of the [assistant](/docs/api-reference/assistants) to use to execute this run.


      The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.


      Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.


      Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      If `true`, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a `data: [DONE]` message.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const run = await openai.beta.threads.createAndRun({
          assistant_id: "asst_abc123",
          thread: {
            messages: [
              { role: "user", content: "Explain deep learning to a 5 year old." },
            ],
          },
        });

        console.log(run);
      }


      main();

      ```


      ```json

      {
        "id": "run_abc123",
        "object": "thread.run",
        "created_at": 1699076792,
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "status": "queued",
        "started_at": null,
        "expires_at": 1699077392,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": null,
        "last_error": null,
        "model": "gpt-4",
        "instructions": "You are a helpful assistant.",
        "tools": [],
        "file_ids": [],
        "metadata": {},
        "usage": null
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/runs


      Returns a list of runs belonging to a thread.


      ### Path parameters


      The ID of the thread the run belongs to.


      ### Query parameters


      A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.


      Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.


      A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `after=obj_foo` in order to fetch the next page of the list.


      A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `before=obj_foo` in order to fetch the previous page of the list.


      ### Returns


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const runs = await openai.beta.threads.runs.list(
          "thread_abc123"
        );

        console.log(runs);
      }


      main();

      ```


      ```json

      {
        "object": "list",
        "data": [
          {
            "id": "run_abc123",
            "object": "thread.run",
            "created_at": 1699075072,
            "assistant_id": "asst_abc123",
            "thread_id": "thread_abc123",
            "status": "completed",
            "started_at": 1699075072,
            "expires_at": null,
            "cancelled_at": null,
            "failed_at": null,
            "completed_at": 1699075073,
            "last_error": null,
            "model": "gpt-3.5-turbo",
            "instructions": null,
            "tools": [
              {
                "type": "code_interpreter"
              }
            ],
            "file_ids": [
              "file-abc123",
              "file-abc456"
            ],
            "metadata": {},
            "usage": {
              "prompt_tokens": 123,
              "completion_tokens": 456,
              "total_tokens": 579
            }
          },
          {
            "id": "run_abc456",
            "object": "thread.run",
            "created_at": 1699063290,
            "assistant_id": "asst_abc123",
            "thread_id": "thread_abc123",
            "status": "completed",
            "started_at": 1699063290,
            "expires_at": null,
            "cancelled_at": null,
            "failed_at": null,
            "completed_at": 1699063291,
            "last_error": null,
            "model": "gpt-3.5-turbo",
            "instructions": null,
            "tools": [
              {
                "type": "code_interpreter"
              }
            ],
            "file_ids": [
              "file-abc123",
              "file-abc456"
            ],
            "metadata": {},
            "usage": {
              "prompt_tokens": 123,
              "completion_tokens": 456,
              "total_tokens": 579
            }
          }
        ],
        "first_id": "run_abc123",
        "last_id": "run_abc456",
        "has_more": false
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/steps


      Returns a list of run steps belonging to a run.


      ### Path parameters


      The ID of the thread the run and run steps belong to.


      The ID of the run the run steps belong to.


      ### Query parameters


      A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.


      Sort order by the `created_at` timestamp of the objects. `asc` for ascending order and `desc` for descending order.


      A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `after=obj_foo` in order to fetch the next page of the list.


      A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with `obj_foo`, your subsequent call can include `before=obj_foo` in order to fetch the previous page of the list.


      ### Returns


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const runStep = await openai.beta.threads.runs.steps.list(
          "thread_abc123",
          "run_abc123"
        );
        console.log(runStep);
      }


      main();

      ```


      ```json

      {
        "object": "list",
        "data": [
          {
            "id": "step_abc123",
            "object": "thread.run.step",
            "created_at": 1699063291,
            "run_id": "run_abc123",
            "assistant_id": "asst_abc123",
            "thread_id": "thread_abc123",
            "type": "message_creation",
            "status": "completed",
            "cancelled_at": null,
            "completed_at": 1699063291,
            "expired_at": null,
            "failed_at": null,
            "last_error": null,
            "step_details": {
              "type": "message_creation",
              "message_creation": {
                "message_id": "msg_abc123"
              }
            },
            "usage": {
              "prompt_tokens": 123,
              "completion_tokens": 456,
              "total_tokens": 579
            }
          }
        ],
        "first_id": "step_abc123",
        "last_id": "step_abc456",
        "has_more": false
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}


      Retrieves a run.


      ### Path parameters


      The ID of the [thread](/docs/api-reference/threads) that was run.


      The ID of the run to retrieve.


      ### Returns


      The [run](/docs/api-reference/runs/object) object matching the specified ID.


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const run = await openai.beta.threads.runs.retrieve(
          "thread_abc123",
          "run_abc123"
        );

        console.log(run);
      }


      main();

      ```


      ```json

      {
        "id": "run_abc123",
        "object": "thread.run",
        "created_at": 1699075072,
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "status": "completed",
        "started_at": 1699075072,
        "expires_at": null,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": 1699075073,
        "last_error": null,
        "model": "gpt-3.5-turbo",
        "instructions": null,
        "tools": [
          {
            "type": "code_interpreter"
          }
        ],
        "file_ids": [
          "file-abc123",
          "file-abc456"
        ],
        "metadata": {},
        "usage": {
          "prompt_tokens": 123,
          "completion_tokens": 456,
          "total_tokens": 579
        }
      }

      ```


      get https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/steps/{step_id}


      Retrieves a run step.


      ### Path parameters


      The ID of the thread to which the run and run step belong.


      The ID of the run to which the run step belongs.


      The ID of the run step to retrieve.


      ### Returns


      The [run step](/docs/api-reference/runs/step-object) object matching the specified ID.


      ```js

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const runStep = await openai.beta.threads.runs.steps.retrieve(
          "thread_abc123",
          "run_abc123",
          "step_abc123"
        );
        console.log(runStep);
      }


      main();

      ```


      ```json

      {
        "id": "step_abc123",
        "object": "thread.run.step",
        "created_at": 1699063291,
        "run_id": "run_abc123",
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "type": "message_creation",
        "status": "completed",
        "cancelled_at": null,
        "completed_at": 1699063291,
        "expired_at": null,
        "failed_at": null,
        "last_error": null,
        "step_details": {
          "type": "message_creation",
          "message_creation": {
            "message_id": "msg_abc123"
          }
        },
        "usage": {
          "prompt_tokens": 123,
          "completion_tokens": 456,
          "total_tokens": 579
        }
      }

      ```


      post https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}


      Modifies a run.


      ### Path parameters


      The ID of the [thread](/docs/api-reference/threads) that was run.


      The ID of the run to modify.


      ### Request body


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      ### Returns


      The modified [run](/docs/api-reference/runs/object) object matching the specified ID.


      ```js

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const run = await openai.beta.threads.runs.update(
          "thread_abc123",
          "run_abc123",
          {
            metadata: {
              user_id: "user_abc123",
            },
          }
        );

        console.log(run);
      }


      main();

      ```


      ```

      {
        "id": "run_abc123",
        "object": "thread.run",
        "created_at": 1699075072,
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "status": "completed",
        "started_at": 1699075072,
        "expires_at": null,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": 1699075073,
        "last_error": null,
        "model": "gpt-3.5-turbo",
        "instructions": null,
        "tools": [
          {
            "type": "code_interpreter"
          }
        ],
        "file_ids": [
          "file-abc123",
          "file-abc456"
        ],
        "metadata": {
          "user_id": "user_abc123"
        },
        "usage": {
          "prompt_tokens": 123,
          "completion_tokens": 456,
          "total_tokens": 579
        }
      }

      ```


      post https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/submit_tool_outputs


      When a run has the `status: "requires_action"` and `required_action.type` is `submit_tool_outputs`, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.


      ### Path parameters


      The ID of the [thread](/docs/api-reference/threads) to which this run belongs.


      The ID of the run that requires the tool output submission.


      ### Request body


      ### Returns


      The modified [run](/docs/api-reference/runs/object) object matching the specified ID.


      ```

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const run = await openai.beta.threads.runs.submitToolOutputs(
          "thread_123",
          "run_123",
          {
            tool_outputs: [
              {
                tool_call_id: "call_001",
                output: "70 degrees and sunny.",
              },
            ],
          }
        );

        console.log(run);
      }


      main();

      ```


      ```

      {
        "id": "run_123",
        "object": "thread.run",
        "created_at": 1699075592,
        "assistant_id": "asst_123",
        "thread_id": "thread_123",
        "status": "queued",
        "started_at": 1699075592,
        "expires_at": 1699076192,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": null,
        "last_error": null,
        "model": "gpt-4",
        "instructions": null,
        "tools": [
          {
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "description": "Get the current weather in a given location",
              "parameters": {
                "type": "object",
                "properties": {
                  "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                  },
                  "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                  }
                },
                "required": ["location"]
              }
            }
          }
        ],
        "file_ids": [],
        "metadata": {},
        "usage": null
      }

      ```
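

      For reference, here is a minimal sketch of how this endpoint might fit into a tool-call loop with the Node SDK, assuming a single `get_current_weather` function as in the example response above; `getCurrentWeather` is a hypothetical helper you would implement yourself:


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      // Hypothetical helper; replace with your own implementation.

      const getCurrentWeather = async (args: { location: string }) =>
        "70 degrees and sunny.";


      async function handleRequiresAction(threadId: string, runId: string) {
        const run = await openai.beta.threads.runs.retrieve(threadId, runId);

        if (
          run.status === "requires_action" &&
          run.required_action?.type === "submit_tool_outputs"
        ) {
          const calls = run.required_action.submit_tool_outputs.tool_calls;

          // Resolve every tool call, then submit all outputs in one request.
          const tool_outputs = await Promise.all(
            calls.map(async (call) => ({
              tool_call_id: call.id,
              output: await getCurrentWeather(JSON.parse(call.function.arguments)),
            }))
          );

          return openai.beta.threads.runs.submitToolOutputs(threadId, runId, {
            tool_outputs,
          });
        }

        return run;
      }

      ```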


      post https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/cancel


      Cancels a run that is `in_progress`.


      ### Path parameters


      The ID of the thread to which this run belongs.


      The ID of the run to cancel.


      ### Returns


      The modified [run](/docs/api-reference/runs/object) object matching the specified ID.


      ```

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const run = await openai.beta.threads.runs.cancel(
          "thread_abc123",
          "run_abc123"
        );

        console.log(run);
      }


      main();

      ```


      ```

      {
        "id": "run_abc123",
        "object": "thread.run",
        "created_at": 1699076126,
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "status": "cancelling",
        "started_at": 1699076126,
        "expires_at": 1699076726,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": null,
        "last_error": null,
        "model": "gpt-4",
        "instructions": "You summarize books.",
        "tools": [
          {
            "type": "retrieval"
          }
        ],
        "file_ids": [],
        "metadata": {},
        "usage": null
      }

      ```


      Represents an execution run on a [thread](/docs/api-reference/threads).


      The identifier, which can be referenced in API endpoints.


      The object type, which is always `thread.run`.


      The Unix timestamp (in seconds) for when the run was created.


      The ID of the [thread](/docs/api-reference/threads) that was executed on as a part of this run.


      The ID of the [assistant](/docs/api-reference/assistants) used for execution of this run.


      The status of the run, which can be either `queued`, `in_progress`, `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or `expired`.


      Details on the action required to continue the run. Will be `null` if no action is required.


      The last error associated with this run. Will be `null` if there are no errors.


      The Unix timestamp (in seconds) for when the run will expire.


      The Unix timestamp (in seconds) for when the run was started.


      The Unix timestamp (in seconds) for when the run was cancelled.


      The Unix timestamp (in seconds) for when the run failed.


      The Unix timestamp (in seconds) for when the run was completed.


      The model that the [assistant](/docs/api-reference/assistants) used for this run.


      The instructions that the [assistant](/docs/api-reference/assistants) used for this run.


      The list of tools that the [assistant](/docs/api-reference/assistants) used for this run.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      Usage statistics related to the run. This value will be `null` if the run is not in a terminal state (i.e. `in_progress`, `queued`, etc.).


      ```

      {
        "id": "run_abc123",
        "object": "thread.run",
        "created_at": 1698107661,
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "status": "completed",
        "started_at": 1699073476,
        "expires_at": null,
        "cancelled_at": null,
        "failed_at": null,
        "completed_at": 1699073498,
        "last_error": null,
        "model": "gpt-4",
        "instructions": null,
        "tools": [{"type": "retrieval"}, {"type": "code_interpreter"}],
        "file_ids": [],
        "metadata": {},
        "usage": {
          "prompt_tokens": 123,
          "completion_tokens": 456,
          "total_tokens": 579
        }
      }

      ```
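

      Because `usage` is only populated once a run reaches a terminal state, clients typically poll the run until it leaves `queued`/`in_progress`. A minimal sketch of such a helper (the 1-second interval is an arbitrary choice):


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      // Poll a run until it is no longer queued or in progress.

      async function waitForRun(threadId: string, runId: string) {
        let run = await openai.beta.threads.runs.retrieve(threadId, runId);

        while (run.status === "queued" || run.status === "in_progress") {
          await new Promise((resolve) => setTimeout(resolve, 1000));
          run = await openai.beta.threads.runs.retrieve(threadId, runId);
        }

        return run; // completed, requires_action, failed, cancelled, or expired
      }

      ```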


      Represents a step in execution of a run.


      The identifier of the run step, which can be referenced in API endpoints.


      The object type, which is always `thread.run.step`.


      The Unix timestamp (in seconds) for when the run step was created.


      The ID of the [assistant](/docs/api-reference/assistants) associated with the run step.


      The ID of the [thread](/docs/api-reference/threads) that was run.


      The ID of the [run](/docs/api-reference/runs) that this run step is a part of.


      The type of run step, which can be either `message_creation` or `tool_calls`.


      The status of the run step, which can be either `in_progress`, `cancelled`, `failed`, `completed`, or `expired`.


      The details of the run step.


      The last error associated with this run step. Will be `null` if there are no errors.


      The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.


      The Unix timestamp (in seconds) for when the run step was cancelled.


      The Unix timestamp (in seconds) for when the run step failed.


      The Unix timestamp (in seconds) for when the run step completed.


      Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.


      Usage statistics related to the run step. This value will be `null` while the run step's status is `in_progress`.


      ```

      {
        "id": "step_abc123",
        "object": "thread.run.step",
        "created_at": 1699063291,
        "run_id": "run_abc123",
        "assistant_id": "asst_abc123",
        "thread_id": "thread_abc123",
        "type": "message_creation",
        "status": "completed",
        "cancelled_at": null,
        "completed_at": 1699063291,
        "expired_at": null,
        "failed_at": null,
        "last_error": null,
        "step_details": {
          "type": "message_creation",
          "message_creation": {
            "message_id": "msg_abc123"
          }
        },
        "usage": {
          "prompt_tokens": 123,
          "completion_tokens": 456,
          "total_tokens": 579
        }
      }

      ```
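

      To inspect how a run produced its output, you can list its steps and branch on `step_details.type`; a rough sketch:


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function printRunSteps(threadId: string, runId: string) {
        const steps = await openai.beta.threads.runs.steps.list(threadId, runId);

        for (const step of steps.data) {
          if (step.step_details.type === "message_creation") {
            console.log("message:", step.step_details.message_creation.message_id);
          } else if (step.step_details.type === "tool_calls") {
            console.log("tool calls:", step.step_details.tool_calls.length);
          }
        }
      }

      ```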


      Represents a message delta i.e. any changed fields on a message during streaming.


      The identifier of the message, which can be referenced in API endpoints.


      The object type, which is always `thread.message.delta`.


      The delta containing the fields that have changed on the Message.


      ```

      {
        "id": "msg_123",
        "object": "thread.message.delta",
        "delta": {
          "content": [
            {
              "index": 0,
              "type": "text",
              "text": { "value": "Hello", "annotations": [] }
            }
          ]
        }
      }

      ```


      Represents a run step delta i.e. any changed fields on a run step during streaming.


      The identifier of the run step, which can be referenced in API endpoints.


      The object type, which is always `thread.run.step.delta`.


      The delta containing the fields that have changed on the run step.


      ```

      {
        "id": "step_123",
        "object": "thread.run.step.delta",
        "delta": {
          "step_details": {
            "type": "tool_calls",
            "tool_calls": [
              {
                "index": 0,
                "id": "call_123",
                "type": "code_interpreter",
                "code_interpreter": { "input": "", "outputs": [] }
              }
            ]
          }
        }
      }

      ```
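

      During streaming, each delta carries only the changed fragment, so clients append fragments as they arrive. A sketch of accumulating `code_interpreter` input from run step deltas (the `delta` argument stands in for whatever your stream handler receives, hence the loose `any` typing):


      ```typescript

      // Accumulate streamed code_interpreter input as deltas arrive.

      let codeInput = "";


      function onRunStepDelta(delta: any) {
        if (delta.step_details?.type !== "tool_calls") return;

        for (const call of delta.step_details.tool_calls ?? []) {
          if (call.type === "code_interpreter" && call.code_interpreter?.input) {
            codeInput += call.code_interpreter.input;
          }
        }
      }

      ```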


      Represents an event emitted when streaming a Run.


      Each event in a server-sent events stream has an `event` and `data` property:


      ```

      event: thread.created

      data: {"id": "thread_123", "object": "thread", ...}

      ```


      We emit events whenever a new object is created, transitions to a new state, or is being streamed in parts (deltas). For example, we emit `thread.run.created` when a new run is created, `thread.run.completed` when a run completes, and so on. When an Assistant chooses to create a message during a run, we emit a `thread.message.created` event, a `thread.message.in_progress` event, many `thread.message.delta` events, and finally a `thread.message.completed` event.


      We may add additional events over time, so we recommend handling unknown events gracefully in your code. See the [Assistants API quickstart](/docs/assistants/overview) to learn how to integrate the Assistants API with streaming.


      Occurs when a new [thread](/docs/api-reference/threads/object) is created.


      Occurs when a new [run](/docs/api-reference/runs/object) is created.


      Occurs when a [run](/docs/api-reference/runs/object) moves to a `queued` status.


      Occurs when a [run](/docs/api-reference/runs/object) moves to an `in_progress` status.


      Occurs when a [run](/docs/api-reference/runs/object) moves to a `requires_action` status.


      Occurs when a [run](/docs/api-reference/runs/object) is completed.


      Occurs when a [run](/docs/api-reference/runs/object) moves to a `cancelling` status.


      Occurs when a [run](/docs/api-reference/runs/object) is cancelled.


      Occurs when a [run](/docs/api-reference/runs/object) expires.


      Occurs when a [run step](/docs/api-reference/runs/step-object) moves to an `in_progress` state.


      Occurs when parts of a [run step](/docs/api-reference/runs/step-object) are being streamed.


      Occurs when a [message](/docs/api-reference/messages/object) moves to an `in_progress` state.


      Occurs when parts of a [Message](/docs/api-reference/messages/object) are being streamed.


      Occurs when a [message](/docs/api-reference/messages/object) is completed.


      Occurs when a [message](/docs/api-reference/messages/object) ends before it is completed.


      Occurs when an [error](/docs/guides/error-codes/api-errors) occurs. This can happen due to an internal server error or a timeout.


      Occurs when a stream ends.
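

      Putting these events together, here is a rough sketch of consuming the stream with the Node SDK, assuming run creation accepts `stream: true` and yields `{ event, data }` pairs as described above (unknown events are ignored, as recommended):


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function streamRun(threadId: string, assistantId: string) {
        const stream = await openai.beta.threads.runs.create(threadId, {
          assistant_id: assistantId,
          stream: true,
        });

        for await (const { event, data } of stream) {
          if (event === "thread.message.delta") {
            // Print text fragments as they arrive.
            for (const part of (data as any).delta.content ?? []) {
              if (part.type === "text") process.stdout.write(part.text.value);
            }
          } else if (event === "thread.run.completed") {
            console.log("\nrun finished");
          }
          // Other (or future) event types are ignored here.
        }
      }

      ```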


      Given a prompt, the model will return one or more predicted completions along with the probabilities of alternative tokens at each position. Most developers should use our [Chat Completions API](/docs/guides/text-generation/text-generation-models) to leverage our best and newest models.


      post https://api.openai.com/v1/completions


      Creates a completion for the provided prompt and parameters.


      ### Request body


      ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.


      The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.


      Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.


      Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.


      When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`.


      **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.


      Echo back the prompt in addition to the completion


      Modify the likelihood of specified tokens appearing in the completion.


      Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.


      As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
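

      For instance, a request that bans `<|endoftext|>` might look like the sketch below (the prompt and `max_tokens` values are arbitrary):


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const completion = await openai.completions.create({
          model: "gpt-3.5-turbo-instruct",
          prompt: "Write one sentence about the sea.",
          max_tokens: 32,
          // Token 50256 is <|endoftext|> in the GPT tokenizer; -100 bans it.
          logit_bias: { "50256": -100 },
        });

        console.log(completion.choices[0].text);
      }


      main();

      ```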


      Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.


      The maximum value for `logprobs` is 5.


      The maximum number of [tokens](/tokenizer) that can be generated in the completion.


      The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
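

      For example, if a model's context length is 4,096 tokens and your prompt uses 4,000 tokens, `max_tokens` can be at most 96.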


      How many completions to generate for each prompt.


      **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.


      If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.


      Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
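

      As an illustration, you might pass a fixed `seed` and compare `system_fingerprint` across responses; a sketch (the seed value is arbitrary):


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const completion = await openai.completions.create({
          model: "gpt-3.5-turbo-instruct",
          prompt: "Say this is a test.",
          max_tokens: 7,
          temperature: 0,
          seed: 42,
        });

        // If system_fingerprint differs between calls, the backend changed and
        // the same seed may no longer reproduce the same output.
        console.log(completion.system_fingerprint, completion.choices[0].text);
      }


      main();

      ```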


      Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.


      Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
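

      A sketch of consuming that stream with the Node SDK using `for await` (prompt and `max_tokens` are arbitrary):


      ```typescript

      import OpenAI from "openai";

      const openai = new OpenAI();


      async function main() {
        const stream = await openai.completions.create({
          model: "gpt-3.5-turbo-instruct",
          prompt: "Write a haiku about the ocean.",
          max_tokens: 64,
          stream: true,
        });

        // Each chunk carries a partial completion; the iterator ends when the
        // `data: [DONE]` message arrives.
        for await (const chunk of stream) {
          process.stdout.write(chunk.choices[0].text);
        }
      }


      main();

      ```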


      The suffix that comes after a completion of inserted text.


      This parameter is only supported for `gpt-3.5-turbo-instruct`.


      What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.


      We generally recommend altering this or `top_p` but not both.


      An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.


      We generally recommend altering this or `temperature` but not both.


      A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).


      ### Returns


      Returns a [completion](/docs/api-reference/completions/object) object, or a sequence of completion objects if the request is streamed.


      ```

      import OpenAI from "openai";


      const openai = new OpenAI();


      async function main() {
        const completion = await openai.completions.create({
          model: "gpt-3.5-turbo-instruct",
          prompt: "Say this is a test.",
          max_tokens: 7,
          temperature: 0,
        });

        console.log(completion);
      }

      main();

      ```


      ```

      {
        "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
        "object": "text_completion",
        "created": 1589478378,
        "model": "gpt-3.5-turbo-instruct",
        "system_fingerprint": "fp_44709d6fcb",
        "choices": [
          {
            "text": "\n\nThis is indeed a test",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
          }
        ],
        "usage": {
          "prompt_tokens": 5,
          "completion_tokens": 7,
          "total_tokens": 12
        }
      }

      ```


      Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).


      A unique identifier for the completion.


      The list of completion choices the model generated for the input prompt.


      The Unix timestamp (in seconds) of when the completion was created.


      The model used for completion.


      This fingerprint represents the backend configuration that the model runs with.


      Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.


      The object type, which is always "text_completion"


      Usage statistics for the completion request.


      ```

      {
        "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
        "object": "text_completion",
        "created": 1589478378,
        "model": "gpt-3.5-turbo",
        "choices": [
          {
            "text": "\n\nThis is indeed a test",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
          }
        ],
        "usage": {
          "prompt_tokens": 5,
          "completion_tokens": 7,
          "total_tokens": 12
        }
      }

      ```


      https://gist.githubusercontent.com/tarasglek/79339a5ff264fbc5cfb98608bc600461/raw/69906fc5cfaea58ce14bd58635ee224da5b866df/Assistants-Api.html
    imageUrls: []
    user:
      username: tarasglek
      name: Taras Glek
      avatarUrl: https://avatars.githubusercontent.com/u/857083?v=4
  - id: vZtJNHLXWcRlpsjWrMqi7
    date: 2024-03-29T11:27:21.010Z
    type: human
    text: Above is assistant api documentation. write me a diagram of all the
      methods/classes i need to use to implement an assistant..eg i need to
      create an assistant, create threads, upload files to both assistant and
      threads. write some nice bullet points to help understand this
    imageUrls: []
    user:
      username: tarasglek
      name: Taras Glek
      avatarUrl: https://avatars.githubusercontent.com/u/857083?v=4
  - id: X-kl66gly1aXcYQFbyEwg
    date: 2024-03-29T11:27:21.550Z
    type: ai
    text: >-
      Sure, here's a diagram and some bullet points to help you understand the
      process of implementing an assistant using the OpenAI API:


      ```mermaid

      graph TD
          A[Create Assistant] --> B[Attach Files to Assistant]
          A --> C[Create Thread]
          C --> D[Create Messages in Thread]
          D --> E[Attach Files to Messages]
          C --> F[Create Run]
          F --> G[Submit Tool Outputs]
          F --> H[Cancel Run]
          F --> I[Update Run]
          F --> J[List Run Steps]
          F --> K[Retrieve Run Step]
          F --> L[Retrieve Run]
          F --> M[List Runs]
      ```


      Bullet Points:


      - **Create Assistant**: Use the `POST /v1/assistants` endpoint to create a new assistant. You'll need to provide a model, name, instructions, and tools.

      - **Attach Files to Assistant**: Use the `POST /v1/assistants/{assistant_id}/files` endpoint to attach files to the assistant. These files can be used by tools like `retrieval` and `code_interpreter`.

      - **Create Thread**: Use the `POST /v1/threads` endpoint to create a new thread. Threads are used to manage conversations and runs.

      - **Create Messages in Thread**: Use the `POST /v1/threads/{thread_id}/messages` endpoint to create new messages in a thread. These messages can be from the user or the assistant.

      - **Attach Files to Messages**: Attach files to a message by passing `file_ids` in the body of `POST /v1/threads/{thread_id}/messages` when you create it; the message-files endpoints are read-only (list and retrieve). These files can be used by the assistant during the run.

      - **Create Run**: Use the `POST /v1/threads/{thread_id}/runs` endpoint to create a new run on a thread. A run is an execution of the assistant on the thread.

      - **Submit Tool Outputs**: If a run requires action (e.g., submitting outputs from tool calls), use the `POST /v1/threads/{thread_id}/runs/{run_id}/submit_tool_outputs` endpoint to submit the outputs.

      - **Cancel Run**: Use the `POST /v1/threads/{thread_id}/runs/{run_id}/cancel` endpoint to cancel a run that is in progress.

      - **Update Run**: Use the `POST /v1/threads/{thread_id}/runs/{run_id}` endpoint to update the metadata of a run.

      - **List Run Steps**: Use the `GET /v1/threads/{thread_id}/runs/{run_id}/steps` endpoint to list the steps of a run.

      - **Retrieve Run Step**: Use the `GET /v1/threads/{thread_id}/runs/{run_id}/steps/{step_id}` endpoint to retrieve a specific run step.

      - **Retrieve Run**: Use the `GET /v1/threads/{thread_id}/runs/{run_id}` endpoint to retrieve a specific run.

      - **List Runs**: Use the `GET /v1/threads/{thread_id}/runs` endpoint to list all runs for a thread.


      This diagram and bullet points cover the main endpoints and flow for creating and interacting with assistants, threads, messages, files, runs, and run steps. Remember to refer to the official documentation for detailed information on request/response formats, parameters, and examples.
    imageUrls: []
    model: anthropic/claude-3-sonnet:beta
    versions:
      - id: Lnjv_UJI-s0rrLvN64vm4
        date: 2024-03-29T11:27:55.265Z
        model: anthropic/claude-3-sonnet:beta
        text: >-
          Sure, here's a diagram and some bullet points to help you understand
          the process of implementing an assistant using the OpenAI API:


          ```mermaid

          graph TD
              A[Create Assistant] --> B[Attach Files to Assistant]
              A --> C[Create Thread]
              C --> D[Create Messages in Thread]
              D --> E[Attach Files to Messages]
              C --> F[Create Run]
              F --> G[Submit Tool Outputs]
              F --> H[Cancel Run]
              F --> I[Update Run]
              F --> J[List Run Steps]
              F --> K[Retrieve Run Step]
              F --> L[Retrieve Run]
              F --> M[List Runs]
          ```


          Bullet Points:


          - **Create Assistant**: Use the `POST /v1/assistants` endpoint to create a new assistant. You'll need to provide a model, name, instructions, and tools.

          - **Attach Files to Assistant**: Use the `POST /v1/assistants/{assistant_id}/files` endpoint to attach files to the assistant. These files can be used by tools like `retrieval` and `code_interpreter`.

          - **Create Thread**: Use the `POST /v1/threads` endpoint to create a new thread. Threads are used to manage conversations and runs.

          - **Create Messages in Thread**: Use the `POST /v1/threads/{thread_id}/messages` endpoint to create new messages in a thread. These messages can be from the user or the assistant.

          - **Attach Files to Messages**: Use the `POST /v1/threads/{thread_id}/messages/{message_id}/files` endpoint to attach files to a specific message. These files can be used by the assistant during the run.

          - **Create Run**: Use the `POST /v1/threads/{thread_id}/runs` endpoint to create a new run on a thread. A run is an execution of the assistant on the thread.

          - **Submit Tool Outputs**: If a run requires action (e.g., submitting outputs from tool calls), use the `POST /v1/threads/{thread_id}/runs/{run_id}/submit_tool_outputs` endpoint to submit the outputs.

          - **Cancel Run**: Use the `POST /v1/threads/{thread_id}/runs/{run_id}/cancel` endpoint to cancel a run that is in progress.

          - **Update Run**: Use the `POST /v1/threads/{thread_id}/runs/{run_id}` endpoint to update the metadata of a run.

          - **List Run Steps**: Use the `GET /v1/threads/{thread_id}/runs/{run_id}/steps` endpoint to list the steps of a run.

          - **Retrieve Run Step**: Use the `GET /v1/threads/{thread_id}/runs/{run_id}/steps/{step_id}` endpoint to retrieve a specific run step.

          - **Retrieve Run**: Use the `GET /v1/threads/{thread_id}/runs/{run_id}` endpoint to retrieve a specific run.

          - **List Runs**: Use the `GET /v1/threads/{thread_id}/runs` endpoint to list all runs for a thread.


          This diagram and bullet points cover the main endpoints and flow for creating and interacting with assistants, threads, messages, files, runs, and run steps. Remember to refer to the official documentation for detailed information on request/response formats, parameters, and examples.
  - id: qp1yzmWgjEV-uuNXZJFZp
    date: 2024-03-29T11:28:05.896Z
    type: human
    text: now show me the official typescript api to use for this
    imageUrls: []
    user:
      username: tarasglek
      name: Taras Glek
      avatarUrl: https://avatars.githubusercontent.com/u/857083?v=4
  - id: SoXomt9hHXUAIXD4F8kzp
    date: 2024-03-29T11:28:52.099Z
    type: human
    text: e.g. repeat above message but instead of http api show me typescript apis
      that wrap them
    imageUrls: []
    user:
      username: tarasglek
      name: Taras Glek
      avatarUrl: https://avatars.githubusercontent.com/u/857083?v=4
  - id: v6dCNoV5IfeqMw2ZSBEl0
    date: 2024-03-29T11:28:52.639Z
    type: ai
    text: >-
      Sure, here's an example of how you might use the official OpenAI Node.js
      library to implement the various functionalities mentioned earlier:


      ```typescript

      import { OpenAI } from "openai";


      const openai = new OpenAI();


      // Create Assistant

      const createAssistant = async () => {
        const assistant = await openai.beta.assistants.create({
          model: "gpt-4",
          name: "My Assistant",
          instructions: "You are a helpful assistant.",
          tools: [{ type: "code_interpreter" }],
        });
        console.log(assistant);
      };


      // Attach Files to Assistant

      const attachFileToAssistant = async (assistantId: string, fileId: string) => {
        const assistantFile = await openai.beta.assistants.files.create(assistantId, {
          file_id: fileId,
        });
        console.log(assistantFile);
      };


      // Create Thread

      const createThread = async () => {
        const thread = await openai.beta.threads.create();
        console.log(thread);
      };


      // Create Message in Thread

      const createMessage = async (threadId: string, content: string) => {
        const message = await openai.beta.threads.messages.create(threadId, {
          role: "user",
          content,
        });
        console.log(message);
      };


      // Attach File to Message (files are attached via file_ids at creation)

      const createMessageWithFile = async (
        threadId: string,
        content: string,
        fileId: string
      ) => {
        const message = await openai.beta.threads.messages.create(threadId, {
          role: "user",
          content,
          file_ids: [fileId],
        });
        console.log(message);
      };


      // Create Run

      const createRun = async (threadId: string, assistantId: string) => {
        const run = await openai.beta.threads.runs.create(threadId, {
          assistant_id: assistantId,
        });
        console.log(run);
      };


      // Submit Tool Outputs

      const submitToolOutputs = async (
        threadId: string,
        runId: string,
        toolOutputs: { tool_call_id: string; output: any }[]
      ) => {
        const run = await openai.beta.threads.runs.submitToolOutputs(
          threadId,
          runId,
          { tool_outputs: toolOutputs }
        );
        console.log(run);
      };


      // Cancel Run

      const cancelRun = async (threadId: string, runId: string) => {
        const run = await openai.beta.threads.runs.cancel(threadId, runId);
        console.log(run);
      };


      // Update Run

      const updateRun = async (
        threadId: string,
        runId: string,
        metadata: Record<string, any>
      ) => {
        const run = await openai.beta.threads.runs.update(threadId, runId, {
          metadata,
        });
        console.log(run);
      };


      // List Run Steps

      const listRunSteps = async (threadId: string, runId: string) => {
        const runSteps = await openai.beta.threads.runs.steps.list(threadId, runId);
        console.log(runSteps.data);
      };


      // Retrieve Run Step

      const retrieveRunStep = async (
        threadId: string,
        runId: string,
        stepId: string
      ) => {
        const runStep = await openai.beta.threads.runs.steps.retrieve(
          threadId,
          runId,
          stepId
        );
        console.log(runStep);
      };


      // Retrieve Run

      const retrieveRun = async (threadId: string, runId: string) => {
        const run = await openai.beta.threads.runs.retrieve(threadId, runId);
        console.log(run);
      };


      // List Runs

      const listRuns = async (threadId: string) => {
        const runs = await openai.beta.threads.runs.list(threadId);
        console.log(runs.data);
      };

      ```


      This example demonstrates how to use the various methods provided by the OpenAI Node.js library to perform the actions mentioned earlier, such as creating assistants, threads, messages, runs, and interacting with files and run steps.


      Note that you'll need to import the necessary types and interfaces from the `openai` package, and you may need to handle errors and other edge cases in your actual implementation.
    imageUrls: []
    model: anthropic/claude-3-sonnet:beta
    versions:
      - id: DsT1-6BXx8ZXkUlkKyea7
        date: 2024-03-29T11:29:32.817Z
        model: anthropic/claude-3-sonnet:beta
        text: >-
          Sure, here's an example of how you might use the official OpenAI
          Node.js library to implement the various functionalities mentioned
          earlier:


          ```typescript

          import { OpenAI } from "openai";


          const openai = new OpenAI();


          // Create Assistant

          const createAssistant = async () => {
            const assistant = await openai.beta.assistants.create({
              model: "gpt-4",
              name: "My Assistant",
              instructions: "You are a helpful assistant.",
              tools: [{ type: "code_interpreter" }],
            });
            console.log(assistant);
          };


          // Attach Files to Assistant

          const attachFileToAssistant = async (assistantId: string, fileId: string) => {
            const assistantFile = await openai.beta.assistants.files.create(assistantId, {
              file_id: fileId,
            });
            console.log(assistantFile);
          };


          // Create Thread

          const createThread = async () => {
            const thread = await openai.beta.threads.create();
            console.log(thread);
          };


          // Create Message in Thread

          const createMessage = async (threadId: string, content: string) => {
            const message = await openai.beta.threads.messages.create(threadId, {
              role: "user",
              content,
            });
            console.log(message);
          };


          // Attach File to Message

          const attachFileToMessage = async (
            threadId: string,
            messageId: string,
            fileId: string
          ) => {
            const messageFile = await openai.beta.threads.messages.files.create(
              threadId,
              messageId,
              { file_id: fileId }
            );
            console.log(messageFile);
          };


          // Create Run

          const createRun = async (threadId: string, assistantId: string) => {
            const run = await openai.beta.threads.runs.create(threadId, {
              assistant_id: assistantId,
            });
            console.log(run);
          };


          // Submit Tool Outputs

          const submitToolOutputs = async (
            threadId: string,
            runId: string,
            toolOutputs: { tool_call_id: string; output: any }[]
          ) => {
            const run = await openai.beta.threads.runs.submitToolOutputs(
              threadId,
              runId,
              { tool_outputs: toolOutputs }
            );
            console.log(run);
          };


          // Cancel Run

          const cancelRun = async (threadId: string, runId: string) => {
            const run = await openai.beta.threads.runs.cancel(threadId, runId);
            console.log(run);
          };


          // Update Run

          const updateRun = async (
            threadId: string,
            runId: string,
            metadata: Record<string, any>
          ) => {
            const run = await openai.beta.threads.runs.update(threadId, runId, {
              metadata,
            });
            console.log(run);
          };


          // List Run Steps

          const listRunSteps = async (threadId: string, runId: string) => {
            const runSteps = await openai.beta.threads.runs.steps.list(threadId, runId);
            console.log(runSteps.data);
          };


          // Retrieve Run Step

          const retrieveRunStep = async (
            threadId: string,
            runId: string,
            stepId: string
          ) => {
            const runStep = await openai.beta.threads.runs.steps.retrieve(
              threadId,
              runId,
              stepId
            );
            console.log(runStep);
          };


          // Retrieve Run

          const retrieveRun = async (threadId: string, runId: string) => {
            const run = await openai.beta.threads.runs.retrieve(threadId, runId);
            console.log(run);
          };


          // List Runs

          const listRuns = async (threadId: string) => {
            const runs = await openai.beta.threads.runs.list(threadId);
            console.log(runs.data);
          };

          ```


          This example demonstrates how to use the various methods provided by the OpenAI Node.js library to perform the actions mentioned earlier, such as creating assistants, threads, messages, runs, and interacting with files and run steps.


          Note that you'll need to import the necessary types and interfaces from the `openai` package, and you may need to handle errors and other edge cases in your actual implementation.
  - id: qWIzYyc495bqRe22Qc77N
    date: 2024-03-29T11:33:25.243Z
    type: human
    text: that's not complete, show me a complete way to create an assistant and
      have a convo with it.. end to end
    imageUrls: []
    user:
      username: tarasglek
      name: Taras Glek
      avatarUrl: https://avatars.githubusercontent.com/u/857083?v=4
  - id: 8FvkAJvJMezaoVnCPrCo9
    date: 2024-03-30T08:57:18.484Z
    type: ai
    text: >2-
       Sure, here's a more complete example of how you might use the OpenAI Node.js library to create an assistant and have a conversation with it:

      ```typescript

      import { OpenAI } from "openai";


      const openai = new OpenAI();


      // Create Assistant

      const createAssistant = async () => {
        const assistant = await openai.beta.assistants.create({
          model: "gpt-4",
          name: "My Assistant",
          instructions: "You are a helpful assistant. When asked a question, write and run Python code to answer it.",
          tools: [{ type: "code_interpreter" }],
        });
        console.log(assistant);
        return assistant;
      };


      // Create Thread

      const createThread = async () => {
        const thread = await openai.beta.threads.create();
        console.log(thread);
        return thread;
      };


      // Create Message in Thread

      const createMessage = async (threadId: string, content: string) => {
        const message = await openai.beta.threads.messages.create(threadId, {
          role: "user",
          content,
        });
        console.log(message);
        return message;
      };


      // Create Run

      const createRun = async (threadId: string, assistantId: string) => {
        const run = await openai.beta.threads.runs.create(threadId, {
          assistant_id: assistantId,
        });
        console.log(run);
        return run;
      };


      // Submit Tool Outputs

      const submitToolOutputs = async (
        threadId: string,
        runId: string,
        toolOutputs: { tool_call_id: string; output: any }[]
      ) => {
        const run = await openai.beta.threads.runs.submitToolOutputs(
          threadId,
          runId,
          { tool_outputs: toolOutputs }
        );
        console.log(run);
      };


      // Cancel Run

      const cancelRun = async (threadId: string, runId: string) => {
        const run = await openai.beta.threads.runs.cancel(threadId, runId);
        console.log(run);
      };


      // Update Run

      const updateRun = async (
        threadId: string,
        runId: string,
        metadata: Record<string, any>
      ) => {
        const run = await openai.beta.threads.runs.update(threadId, runId, {
          metadata,
        });
        console.log(run);
      };


      // List Run Steps

      const listRunSteps = async (threadId: string, runId: string) => {
        const runSteps = await openai.beta.threads.runs.steps.list(threadId, runId);
        console.log(runSteps.data);
      };


      // Retrieve Run Step

      const retrieveRunStep = async (
        threadId: string,
        runId: string,
        stepId: string
      ) => {
        const runStep = await openai.beta.threads.runs.steps.retrieve(
          threadId,
          runId,
          stepId
        );
        console.log(runStep);
      };


      // Retrieve Run

      const retrieveRun = async (threadId: string, runId: string) => {
        const run = await openai.beta.threads.runs.retrieve(threadId, runId);
        console.log(run);
        return run;
      };


      // List Runs

      const listRuns = async (threadId: string) => {
        const runs = await openai.beta.threads.runs.list(threadId);
        console.log(runs.data);
      };


      (async () => {
        try {
          // Create an assistant
          const assistant = await createAssistant();

          // Create a thread
          const thread = await createThread();

          // Add the user's question to the thread
          await createMessage(
            thread.id,
            "How does AI work? Explain it in simple terms."
          );

          // Create a run so the assistant processes the thread
          let run = await createRun(thread.id, assistant.id);

          // Poll until the run reaches a terminal state
          while (run.status === "queued" || run.status === "in_progress") {
            await new Promise((resolve) => setTimeout(resolve, 1000));
            run = await retrieveRun(thread.id, run.id);
          }

          // The assistant's reply is the newest message in the thread
          const messages = await openai.beta.threads.messages.list(thread.id);
          console.log("Assistant:", JSON.stringify(messages.data[0].content));
        } catch (error) {
          console.error(error);
        }
      })();

      ```


      This example demonstrates how to create an assistant, create a thread, add a message to the thread, create a run so the assistant answers the question, poll until the run completes, and then read the assistant's response from the thread's messages.
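

      To keep the conversation going, you can reuse the same thread: add another user message, create another run, and poll it the same way. A sketch using the helpers defined above (the follow-up question is arbitrary):


      ```typescript

      const askFollowUp = async (threadId: string, assistantId: string) => {
        await createMessage(threadId, "Can you give me a concrete example?");

        let run = await createRun(threadId, assistantId);

        while (run.status === "queued" || run.status === "in_progress") {
          await new Promise((resolve) => setTimeout(resolve, 1000));
          run = await retrieveRun(threadId, run.id);
        }

        const messages = await openai.beta.threads.messages.list(threadId);
        console.log("Assistant:", JSON.stringify(messages.data[0].content));
      };

      ```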


      Note that you'll need to handle errors and other edge cases in your actual implementation. Also, you may need to adjust the code to fit your specific use case.
    imageUrls: []
    model: databricks/dbrx-instruct
    versions:
      - id: pUWTpLEjCkmoYoCUGfzx3
        date: 2024-03-30T08:57:18.484Z
        model: databricks/dbrx-instruct
        text: >2-
           Sure, here's a more complete example of how you might use the OpenAI Node.js library to create an assistant and have a conversation with it:

          ```typescript

          import { OpenAI } from "openai";


          const openai = new OpenAI();


          // Create Assistant

          const createAssistant = async () => {
            const assistant = await openai.beta.assistants.create({
              model: "gpt-4",
              name: "My Assistant",
              instructions: "You are a helpful assistant. When asked a question, write and run Python code to answer it.",
              tools: [{ type: "code_interpreter" }],
            });
            console.log(assistant);
            return assistant;
          };


          // Create Thread

          const createThread = async (assistant: any) => {
            const thread = await openai.beta.threads.create();
            console.log(thread);
            return thread;
          };


          // Create Message in Thread

          const createMessage = async (threadId: string, content: string) => {
            const message = await openai.beta.threads.messages.create(threadId, {
              role: "user",
              content,
            });
            console.log(message);
            return message;
          };


          // Create Run

          const createRun = async (threadId: string, assistantId: string) => {
            const run = await openai.beta.threads.runs.create(threadId, {
              assistant_id: assistantId,
            });
            console.log(run);
            return run;
          };


          // Submit Tool Outputs

          const submitToolOutputs = async (
            threadId: string,
            runId: string,
            toolOutputs: { tool_call_id: string; output: any }[]
          ) => {
            const run = await openai.beta.threads.runs.submitToolOutputs(
              threadId,
              runId,
              { tool_outputs: toolOutputs }
            );
            console.log(run);
          };


          // Cancel Run

          const cancelRun = async (threadId: string, runId: string) => {
            const run = await openai.beta.threads.runs.cancel(threadId, runId);
            console.log(run);
          };


          // Update Run

          const updateRun = async (
            threadId: string,
            runId: string,
            metadata: Record<string, any>
          ) => {
            const run = await openai.beta.threads.runs.update(threadId, runId, {
              metadata,
            });
            console.log(run);
          };


          // List Run Steps

          const listRunSteps = async (threadId: string, runId: string) => {
            const runSteps = await openai.beta.threads.runs.steps.list(threadId, runId);
            console.log(runSteps.data);
          };


          // Retrieve Run Step

          const retrieveRunStep = async (
            threadId: string,
            runId: string,
            stepId: string
          ) => {
            const runStep = await openai.beta.threads.runs.steps.retrieve(
              threadId,
              runId,
              stepId
            );
            console.log(runStep);
          };


          // Retrieve Run

          const retrieveRun = async (threadId: string, runId: string) => {
            const run = await openai.beta.threads.runs.retrieve(threadId, runId);
            console.log(run);
          };


          // List Runs

          const listRuns = async (threadId: string) => {
            const runs = await openai.beta.threads.runs.list(threadId);
            console.log(runs.data);
          };


          (async () => {
            try {
              // Create an assistant
              const assistant = await createAssistant();

              // Create a thread
              const thread = await createThread(assistant.id);

              // Create a message in the thread
              const message = await createMessage(thread.id, "How does AI work? Explain it in simple terms.");

              // Create a run for the assistant to answer the question
              const run = await createRun(thread.id, assistant.id);

              // Wait for the assistant to complete the run
              while (run.status!== "completed") {
                await new Promise((resolve) => setTimeout(resolve, 1000));
                run = await retrieveRun(thread.id, run.id);
              }

              // Retrieve the assistant's response
              const response = await retrieveRunStep(thread.id, run.id, run.choices[0].index);

              console.log(`Assistant: ${response.choices[0].text}`);

              // Clean up by cancelling the run
              await cancelRun(thread.id, run.id);
            } catch (error) {
              console.error(error);
            }
          })();

          ```


          This example demonstrates how to create an assistant, create a thread, create a message in the thread, create a run for the assistant to answer the question, wait for the assistant to complete the run, retrieve the assistant's response, and clean up by cancelling the run.


          Note that you'll need to handle errors and other edge cases in your actual implementation. Also, you may need to adjust the code to fit your specific use case.
      - id: jnYzPy5NCQA4npFzlN-HL
        date: 2024-03-29T11:34:05.829Z
        model: anthropic/claude-3-sonnet:beta
        text: >-
          Sure, here's an example of how you can create an assistant and have a
          conversation with it using the OpenAI Node.js library:


          ```typescript

          import { OpenAI, CreateAssistantRequest, CreateRunRequest, CreateThreadRequest, CreateMessageRequest } from "openai";


          const openai = new OpenAI();


          // Create an assistant

          const createAssistant = async () => {
            const assistantRequest: CreateAssistantRequest = {
              model: "gpt-4",
              name: "My Assistant",
              instructions: "You are a helpful assistant.",
              tools: [{ type: "code_interpreter" }],
            };

            const assistant = await openai.beta.assistants.create(assistantRequest);
            console.log("Assistant created:", assistant);
            return assistant.id;
          };


          // Create a thread

          const createThread = async () => {
            const threadRequest: CreateThreadRequest = {};
            const thread = await openai.beta.threads.create(threadRequest);
            console.log("Thread created:", thread);
            return thread.id;
          };


          // Create a message in the thread

          const createMessage = async (threadId: string, content: string) => {
            const messageRequest: CreateMessageRequest = {
              role: "user",
              content,
            };
            const message = await openai.beta.threads.messages.create(threadId, messageRequest);
            console.log("Message created:", message);
          };


          // Create a run on the thread

          const createRun = async (threadId: string, assistantId: string) => {
            const runRequest: CreateRunRequest = {
              assistant_id: assistantId,
            };
            const run = await openai.beta.threads.runs.create(threadId, runRequest);
            console.log("Run created:", run);
            return run.id;
          };


          // Start a conversation

          const startConversation = async () => {
            const assistantId = await createAssistant();
            const threadId = await createThread();

            // Send a message from the user
            await createMessage(threadId, "Hello, how are you?");

            // Create a run to get the assistant's response
            const runId = await createRun(threadId, assistantId);

            // Stream the run events
            const stream = await openai.beta.threads.runs.stream(threadId, runId);
            stream.on("data", (data) => {
              const event = data.toString();
              if (event.includes("thread.message.created")) {
                const message = JSON.parse(event.split("data: ")[1]);
                console.log("Assistant:", message.content[0].text.value);
              }
            });
          };


          startConversation();

          ```


          Here's what's happening in this example:


          1. The `createAssistant` function creates a new assistant with the provided configuration (model, name, instructions, and tools).

          2. The `createThread` function creates a new thread to hold the conversation.

          3. The `createMessage` function creates a new message in the thread with the provided content and the "user" role.

          4. The `createRun` function creates a new run on the thread using the specified assistant.

          5. The `startConversation` function orchestrates the entire process:
             - It creates an assistant and a thread.
             - It sends a message from the user to the thread.
             - It creates a run on the thread using the assistant.
             - It streams the run events and logs the assistant's response when a `thread.message.created` event is received.

          Note that this example uses the `stream` method to stream the run events and log the assistant's response as it's generated. You can also use the `retrieve` method to get the complete response after the run is finished.


          Additionally, you'll need to handle errors, add error handling, and potentially add more functionality based on your specific requirements.