Functions not working with streaming
Problem
Describe the bug

I have streaming working well with this library. I am trying to add a call to a function, but the response does not include the required arguments. The model recognizes that it wants to use the function, and `part.choices[0].finish_reason` contains the value `'function_call'`, but neither the function name nor the arguments are included in the response. Here is what is in `part.choices`:

```
choices: [ { index: 0, delta: {}, finish_reason: 'function_call' } ]
```

I was expecting to see the function name and the args in `delta`.

To Reproduce

1. Create a function
2. Define the function as part of the `chat.completions.create` command
3. Set `stream: true` on the command
4. Loop through the chunks
5. When the function is requested by the model, the name and args are not included in the chunk.

Code snippets

_No response_

OS: Linux
Node version: v18.12.1
Library version: openai-node 4.0.0-beta.1
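In streaming mode, the function name and argument fragments are normally carried in `delta.function_call` on the chunks *before* the final one; the chunk with `finish_reason: 'function_call'` is expected to have an empty delta. A minimal sketch of a typical chunk sequence, using mock data and a hypothetical `get_weather` function:

```typescript
// Mock chunk sequence (not real API output) showing where the function
// name and arguments actually appear during streaming.
type Delta = { function_call?: { name?: string; arguments?: string } };
type Chunk = { choices: { index: number; delta: Delta; finish_reason: string | null }[] };

const chunks: Chunk[] = [
  // The first chunk carries the function name.
  { choices: [{ index: 0, delta: { function_call: { name: 'get_weather', arguments: '' } }, finish_reason: null }] },
  // The argument JSON arrives as string fragments across several chunks.
  { choices: [{ index: 0, delta: { function_call: { arguments: '{"location":' } }, finish_reason: null }] },
  { choices: [{ index: 0, delta: { function_call: { arguments: '"Boston"}' } }, finish_reason: null }] },
  // The final chunk has an empty delta -- only the finish reason.
  { choices: [{ index: 0, delta: {}, finish_reason: 'function_call' }] },
];

// Accumulate the fragments to recover the complete call.
let name = '';
let args = '';
for (const chunk of chunks) {
  const fc = chunk.choices[0].delta.function_call;
  if (fc?.name) name = fc.name;
  if (fc?.arguments) args += fc.arguments;
}
console.log(name, JSON.parse(args)); // get_weather { location: 'Boston' }
```

So an empty delta on the `finish_reason: 'function_call'` chunk is normal; the bug report's symptom only indicates a problem if the earlier deltas are empty as well.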
Fix Function Call Streaming Issue in OpenAI API
The issue arises because the chunk whose `finish_reason` is `'function_call'` carries an empty delta. In streaming mode the function name and argument fragments are normally delivered incrementally in `delta.function_call` on the earlier chunks; if those deltas are empty as well, the likely cause is the beta library version in use, which may not fully support function call responses in streaming mode.
1. Update Library Version
Ensure that you are using the latest stable version of the openai-node library (published on npm as the `openai` package), as newer versions may contain bug fixes and improvements related to function calls in streaming responses.

```bash
npm install openai@latest
```
2. Check Function Definition
Verify that the function is correctly defined and included in the chat.completions.create command. Ensure that the function has a proper signature and that it is being passed correctly in the API call.
```typescript
const functions = [
  {
    name: 'myFunction',
    // `parameters` must be a JSON Schema object.
    parameters: {
      type: 'object',
      properties: { /* parameters */ },
    },
  },
];

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages,
  functions,
  stream: true,
});
```
3. Inspect Streaming Response
Add logging to inspect the streaming response chunks. This will help you determine if the function call details are being sent but not processed correctly or if they are missing entirely.
```typescript
for await (const chunk of response) {
  console.log(chunk);
}
```
4. Handle Function Call in Loop
Modify the loop that processes the streaming chunks to watch for `delta.function_call` on each chunk as well as the `'function_call'` finish reason. The function name and argument fragments arrive incrementally across the earlier chunks, so accumulate them as they stream in rather than reading them from the final chunk, whose delta is empty.
```typescript
let functionName = '';
let functionArgs = '';
for await (const chunk of response) {
  const fc = chunk.choices[0].delta.function_call;
  if (fc?.name) functionName = fc.name;
  if (fc?.arguments) functionArgs += fc.arguments; // fragments accumulate
  if (chunk.choices[0].finish_reason === 'function_call') {
    console.log('Function Call:', functionName, functionArgs);
  }
}
```
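Once the stream ends, the accumulated arguments string should be complete JSON that can be parsed and dispatched to a local implementation. A sketch under the assumption that a hypothetical `getWeather` function was offered to the model; the accumulated values below are hard-coded stand-ins for what the loop would collect:

```typescript
// Hypothetical local implementation of the function exposed to the model.
function getWeather(args: { location: string }): string {
  return `Sunny in ${args.location}`;
}

// Stand-ins for the name and arguments accumulated from the streamed fragments.
const functionName = 'getWeather';
const functionArgs = '{"location": "Boston"}';

// Look up the implementation by name and call it with the parsed arguments.
const registry: Record<string, (args: { location: string }) => string> = { getWeather };
const result = registry[functionName](JSON.parse(functionArgs));
console.log(result); // Sunny in Boston
```

Keeping a registry keyed by function name avoids a growing `if/else` chain as more functions are offered to the model.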
5. Test with Sample Function
Create a simple test case with a known function and validate that the function call is correctly recognized and that its name and arguments are included in the response.
```typescript
const testFunction = {
  name: 'testFunc',
  parameters: {
    type: 'object',
    properties: { arg1: { type: 'string' } },
  },
};
// Call the API and check the response for function call details.
```
Validation
Confirm that the function name and the complete arguments are recovered from the streaming response when the model requests a function call, by checking the console logs added in the previous steps.
Submitted by
Alex Chen