
Examples for streaming tool calls need fixing

Mar 14, 2026 · Confidence Score: 54%
Problem

Confirm this is a Node library issue and not an underlying OpenAI API issue

- [x] This is an issue with the Node library

Describe the bug

I used the `messageReducer` from the examples. After the streamed chunks finished and I had the assembled message, the `arguments` property was always empty. I am testing the new GPT-4 model, which supports multiple function calls. After hours of debugging, I realized the problem was the reducer function, which does not support arrays. This is how my reducer function currently looks, but I am not sure it is correct; it never adds additional items from the delta:

[code block]

To Reproduce

Use the new `gpt-4-1106-preview` model, which returns a `tool_calls` array, in combination with streaming, while accumulating the message using the reducer function from the examples.

Code snippets

_No response_

OS: macOS
Node version: v20.7.0
Library version: openai 4.17.4
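Since the reporter's snippet was not posted, the following is only a hypothetical sketch of a reducer with the reported behavior: it concatenates string fields but has no branch for arrays, so every `tool_calls` chunk after the first is silently dropped and the assembled `arguments` stays incomplete.

```javascript
// Hypothetical naive reducer (assumed shape; the issue's actual code was not shared)
function naiveReducer(acc, delta) {
  for (const [key, value] of Object.entries(delta)) {
    if (typeof value === 'string') {
      acc[key] = (acc[key] ?? '') + value; // strings are concatenated...
    } else if (acc[key] === undefined) {
      acc[key] = value; // ...but arrays are only ever assigned once
    }
    // later array chunks (additional tool_calls fragments) fall through here
  }
  return acc;
}

// Two chunks that together should form one arguments string: '{"city":"Paris"}'
const chunks = [
  { tool_calls: [{ index: 0, function: { arguments: '{"city":' } }] },
  { tool_calls: [{ index: 0, function: { arguments: '"Paris"}' } }] },
];
const msg = chunks.reduce(naiveReducer, {});
// msg.tool_calls still holds only the first fragment: '{"city":'
```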


1 Fix


Fix messageReducer to Support Tool Calls Array

Medium Risk

The messageReducer from the examples does not handle array-valued deltas. With streaming, each chunk's `tool_calls` entry carries an `index` plus a fragment of the call (for example, a few characters of the `arguments` JSON string). Because the reducer never merges these array items, every call's `arguments` ends up empty. The fix is to merge each chunk into the accumulated `tool_calls` slot identified by its `index`, concatenating string fields such as `arguments`.
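To see why merging by `index` matters, here is a sketch of what streamed tool-call deltas roughly look like (shapes simplified and all values invented for illustration), with a minimal merge that reassembles them:

```javascript
// Invented, simplified delta chunks for two parallel tool calls
const deltas = [
  { tool_calls: [{ index: 0, id: 'call_a', function: { name: 'get_weather', arguments: '' } }] },
  { tool_calls: [{ index: 0, function: { arguments: '{"city":' } }] },
  { tool_calls: [{ index: 0, function: { arguments: '"Paris"}' } }] },
  { tool_calls: [{ index: 1, id: 'call_b', function: { name: 'get_time', arguments: '{}' } }] },
];

// Minimal merge: group chunks by index, concatenate the arguments fragments
const calls = [];
for (const d of deltas) {
  for (const { index, id, function: fn } of d.tool_calls) {
    calls[index] ??= { id, function: { name: fn.name, arguments: '' } };
    calls[index].function.arguments += fn.arguments ?? '';
  }
}
// calls[0].function.arguments → '{"city":"Paris"}'
```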


  1. Update messageReducer Function

    Modify the messageReducer so that when a delta field is an array (as `tool_calls` is), each item is merged into the accumulated array at the position given by its `index`, with string fields such as `function.arguments` concatenated rather than replaced.

    javascript
    function messageReducer(accumulator, delta) {
      for (const [key, value] of Object.entries(delta)) {
        if (Array.isArray(value)) {
          // tool_calls chunks carry an `index`; merge each chunk into its slot
          accumulator[key] ??= [];
          for (const { index, ...rest } of value) {
            accumulator[key][index] = messageReducer(accumulator[key][index] ?? {}, rest);
          }
        } else if (typeof value === 'string') {
          accumulator[key] = (accumulator[key] ?? '') + value;
        } else if (value !== null && typeof value === 'object') {
          accumulator[key] = messageReducer(accumulator[key] ?? {}, value);
        }
      }
      return accumulator;
    }
  2. Test with Streaming Response

    Run a test with the updated messageReducer against the gpt-4-1106-preview model, iterating over the streamed chunks and folding each chunk's delta into the accumulated message. Ensure that the reducer correctly accumulates all tool calls from the streaming response.

    javascript
    const stream = await openai.chat.completions.create({
      model: 'gpt-4-1106-preview',
      stream: true,
      messages: [...],
    });
    
    let accumulatedMessage = {};
    for await (const chunk of stream) {
      accumulatedMessage = messageReducer(accumulatedMessage, chunk.choices[0].delta);
    }
  3. Validate Accumulated Arguments

    After running the test, inspect accumulatedMessage.tool_calls and confirm that every call's `function.arguments` is a complete, parseable JSON string. This ensures the reducer is functioning as intended.

    javascript
    // Every call's function.arguments should now be a complete, parseable JSON string
    accumulatedMessage.tool_calls.forEach((c) => console.log(JSON.parse(c.function.arguments)));
  4. Review and Refactor if Necessary

    If any issues are found during testing, review the messageReducer implementation for additional edge cases or refactor for better clarity and performance.

    javascript
    function messageReducer(accumulator, delta) {
      // Possible edge cases to check here: sparse `index` values when calls
      // arrive out of order, empty deltas, and non-string scalar fields
    }

Validation

Confirm that each entry in the accumulated message's `tool_calls` array contains a complete `function.arguments` JSON string after executing the updated messageReducer with the streaming response from the gpt-4-1106-preview model.
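As a concrete check, each reassembled `arguments` string should survive `JSON.parse`; the sample message below is invented for illustration:

```javascript
// Invented accumulated message for illustration
const accumulated = {
  role: 'assistant',
  tool_calls: [
    { id: 'call_a', type: 'function', function: { name: 'get_weather', arguments: '{"city":"Paris"}' } },
  ],
};

// JSON.parse throws if any fragment was dropped mid-stream
for (const call of accumulated.tool_calls) {
  const args = JSON.parse(call.function.arguments);
  console.log(call.function.name, args);
}
```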



Submitted by

Alex Chen (2450 rep)

Tags

openai, gpt, llm, api, bug