
gpt-4-vision-preview does not work as expected.

Mar 14, 2026 · Confidence Score: 49%

Problem

Confirm this is a Node library issue and not an underlying OpenAI API issue

- [x] This is an issue with the Node library

Describe the bug

The response from ChatGPT unexpectedly cuts off when using stream. The response via the API does not match the response through the chat interface; through the API I receive only the beginning of the response, which cuts off unexpectedly. I think this is related to the bug below: https://github.com/openai/openai-node/issues/499

To Reproduce

Call `openai.beta.chat.completions.stream` with an `image_url`. I use the following image. From the API I got only:

"The instructions are asking for a modification of the SQL `CREATE TABLE` statement for"

From chat I got much more.

Code snippets

[code block]

OS: Linux
Node version: v18.16.0
Library version: openai 4.22.0


1 Fix


Fix Stream Response Cutoff in OpenAI Node Library

Medium Risk

The issue arises from the way the Node library handles streaming responses from the OpenAI API. When using `openai.beta.chat.completions.stream`, the library may not properly concatenate or handle the streamed chunks, leading to incomplete responses. This can occur if the stream is not fully consumed or if there are issues with the underlying event handling in the library.


  1. Update OpenAI Node Library

    Ensure you are using the latest version of the OpenAI Node library, as updates may contain bug fixes related to streaming responses.

    bash
    npm install openai@latest
  2. Check Stream Handling Logic

    Review your stream handling. `openai.beta.chat.completions.stream` returns a `ChatCompletionStream` that emits `content` events carrying text deltas (not raw `data` chunks), so collect those deltas and concatenate them before processing.

    typescript
    let fullResponse = '';
    stream.on('content', (delta) => {
      // 'content' fires once per text delta of the streamed response
      fullResponse += delta;
    });
    stream.on('end', () => {
      console.log(fullResponse);
    });
  3. Implement Error Handling

    Add error handling to your stream to catch any issues that may arise during the data transmission. This can help identify if the stream is terminating unexpectedly.

    typescript
    stream.on('error', (error) => {
      console.error('Stream error:', error);
    });
  4. Test with Different Inputs

    Test the streaming functionality with various image URLs and inputs to ensure that the issue is not isolated to a specific case. This can help identify if the problem is consistent across different scenarios.

    typescript
    const testImageUrl = 'your_test_image_url';
    // Call the stream function with the test image URL
  5. Review API Response Settings

    Check the parameters you pass in the request. `gpt-4-vision-preview` is known to return only a few tokens unless `max_tokens` is set explicitly, which produces exactly this kind of truncation. Also pass the image as an `image_url` content part inside a message, not as a top-level parameter.

    typescript
    const stream = openai.beta.chat.completions.stream({
      model: 'gpt-4-vision-preview',
      max_tokens: 1024, // set explicitly; omitting it truncates vision responses
      messages: [{
        role: 'user',
        content: [
          { type: 'text', text: 'Describe this image.' },
          { type: 'image_url', image_url: { url: testImageUrl } },
        ],
      }],
    });

Validation

To confirm the fix worked, run the updated code and verify that the complete response is received without truncation. Compare the output from the API with the expected output from the chat interface for consistency.
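The validation can also be done programmatically by inspecting `finish_reason` on the completion: `'length'` means the model stopped at the token cap (i.e. the response was truncated), while `'stop'` means it finished naturally. A minimal helper sketch (the `completion` variable in the usage comment is hypothetical):

```typescript
// finish_reason values as documented for the chat completions API.
type FinishReason = 'stop' | 'length' | 'content_filter' | 'tool_calls' | null;

// True when the response was cut off by the token limit.
function isTruncated(finishReason: FinishReason): boolean {
  return finishReason === 'length';
}

// Usage with a drained stream (hypothetical `completion` variable):
// const completion = await stream.finalChatCompletion();
// if (isTruncated(completion.choices[0].finish_reason)) {
//   console.warn('Response hit the token limit; raise max_tokens.');
// }
```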


Environment

Submitted by


Alex Chen

2450 rep

Tags

openai · gpt · llm · api · openai-api