
stream.get_final_message() does not return the correct usage of output_tokens

Mar 14, 2026

Problem

As the title says, `stream.get_final_message()` always returns `output_tokens` with a value of `1`. Running the example code `examples/messages_stream.py`, the output looks like: [code block] However, the actual `output_tokens` should be `6` according to the raw HTTP stream response: [code block] So, is this a bug or a feature? I've seen someone in issue #417 using `stream.get_final_message()` to obtain the usage information; if `output_tokens` always returns 1, that won't work properly.
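For context, this behavior is consistent with how the streaming wire format reports usage: the initial `message_start` event carries a usage snapshot whose `output_tokens` is only a placeholder, and the final count arrives later in the closing `message_delta` event. A minimal sketch of folding those events into a final usage dict (event shapes abridged and illustrative, not the SDK's actual objects):

```python
# Simplified event dicts mirroring the raw SSE payloads
# (shapes abridged; field names follow the Messages streaming format).
events = [
    {"type": "message_start",
     "message": {"usage": {"input_tokens": 10, "output_tokens": 1}}},
    {"type": "content_block_delta",
     "delta": {"type": "text_delta", "text": "Hello"}},
    {"type": "message_delta",
     "delta": {"stop_reason": "end_turn"},
     "usage": {"output_tokens": 6}},
]

def final_usage(events):
    """Fold streaming events into the final usage snapshot."""
    usage = {}
    for event in events:
        if event["type"] == "message_start":
            # Initial snapshot: output_tokens is only a placeholder here.
            usage.update(event["message"]["usage"])
        elif event["type"] == "message_delta":
            # The authoritative output_tokens arrives in message_delta.
            usage.update(event["usage"])
    return usage

print(final_usage(events))  # {'input_tokens': 10, 'output_tokens': 6}
```

A client that only reads the `message_start` snapshot and never merges the `message_delta` usage will always report `output_tokens: 1`, which matches the symptom above.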



Fix output_tokens value in stream.get_final_message()

Medium Risk

The method stream.get_final_message() returns an output_tokens value of 1 regardless of the actual token usage reported in the HTTP stream response. One plausible cause is that the usage snapshot is taken from the initial message_start event, whose output_tokens field is only a placeholder, and is never updated from the final message_delta event.


  1. Review stream.get_final_message() implementation

    Examine the code for the stream.get_final_message() method to identify where the output_tokens value is being set. Look for any hardcoded values or incorrect logic that leads to returning 1.

    python
    def get_final_message(self):
        # Suspected bug: output_tokens is effectively hardcoded here
        # instead of being read from the stream's accumulated usage.
        return {'output_tokens': 1}
  2. Update output_tokens assignment

    Modify the method to correctly parse the output_tokens from the raw HTTP response instead of returning a static value. Ensure that the response is being handled properly to extract the correct token count.

    python
    def get_final_message(self):
        # get_http_response() stands in for however the stream exposes
        # the parsed response; read output_tokens from it instead of
        # returning a static value.
        response = self.get_http_response()
        return {'output_tokens': response['output_tokens']}
  3. Implement unit tests

    Create unit tests for the stream.get_final_message() method to validate that it returns the correct output_tokens value based on various sample HTTP responses. This will help ensure that future changes do not break the functionality.

    python
    def test_get_final_message():
        # Stream and set_mock_response are hypothetical test doubles,
        # not the SDK's real classes.
        stream = Stream()
        stream.set_mock_response({'output_tokens': 6})
        assert stream.get_final_message()['output_tokens'] == 6
  4. Deploy and monitor changes

    After implementing the fix and tests, deploy the changes to a staging environment. Monitor the application for any issues and confirm that the output_tokens value is now accurate.
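Steps 1–3 above boil down to accumulating usage across stream events instead of returning the placeholder from message_start. A self-contained sketch of such an accumulator (the Stream class and event shapes here are illustrative, not the SDK's real internals):

```python
class Stream:
    """Illustrative stream accumulator; not the real SDK class."""

    def __init__(self):
        self._usage = {}

    def feed(self, event):
        # message_start carries the initial usage snapshot, whose
        # output_tokens is only a placeholder; message_delta carries
        # the final count and must overwrite it.
        if event["type"] == "message_start":
            self._usage.update(event["message"]["usage"])
        elif event["type"] == "message_delta":
            self._usage.update(event["usage"])

    def get_final_message(self):
        # Return the accumulated usage instead of a hardcoded value.
        return {"usage": dict(self._usage)}


stream = Stream()
stream.feed({"type": "message_start",
             "message": {"usage": {"input_tokens": 10, "output_tokens": 1}}})
stream.feed({"type": "message_delta", "usage": {"output_tokens": 6}})
print(stream.get_final_message())
# {'usage': {'input_tokens': 10, 'output_tokens': 6}}
```

With this shape, get_final_message() reports 6 for the sample stream from the problem description rather than the placeholder value of 1.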

Validation

Run the modified example code from messages_stream.py and verify that the output_tokens value returned by stream.get_final_message() matches the expected value from the raw HTTP stream response. Check logs for any discrepancies.


Submitted by

Alex Chen

Tags

claude, anthropic, llm, api