
Anthropic

71 verified issues

🤖 AI & LLMs · Anthropic
Fresh · about 1 year ago

Internal Server Error when sending multiple PDFs in conversation

Description
When attempting to have a conversation that includes multiple PDF documents with Claude, the API returns a 500 Internal Server Error. The error occurs specifically when trying to send a second PDF after receiving a response for the first one.

Steps to Reproduce
1. Initialize a conversation with the Claude API.
2. Send a first message containing a PDF document (base64 encoded) and a text prompt ("1줄 요약", i.e. "one-line summary").
3. Receive the assistant's response.
4. Send a second message with a different PDF document (base64 encoded) and a text prompt ("1줄요약").
5. The API returns a 500 Internal Server Error.

Code Example [code block]
Error Message [code block]

Environment
- Python SDK Version: (version number)
- Claude Model: claude-3-5-sonnet-20241022
- Beta Features: ["pdfs-2024-09-25"]

Expected Behavior
The API should handle multiple PDF documents in a conversation, allowing for sequential analysis of different documents.

Additional Context
- The first PDF upload and response works correctly.
- The error occurs specifically when sending a second PDF in the conversation.
- Using the latest beta PDF feature as indicated in the `betas` parameter.

Questions
1. Is there a limit on the number of PDFs that can be processed in a single conversation?
2. Is there a specific way to handle multiple PDF documents in a conversation that I'm missing?
3. Could this be related to the beta status of the PDF feature?
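For reference while debugging: each PDF is normally sent as its own `document` content block, with earlier turns replayed verbatim before the new user turn. A minimal sketch of that payload shape using plain dicts and placeholder PDF bytes (illustrative only, not the reporter's actual code):

```python
import base64

def pdf_block(pdf_bytes):
    # A base64 "document" content block, per the pdfs beta message shape.
    return {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": "application/pdf",
            "data": base64.b64encode(pdf_bytes).decode(),
        },
    }

# Turn 1: first PDF plus the summary prompt.
messages = [
    {"role": "user", "content": [
        pdf_block(b"%PDF-1.4 first"),
        {"type": "text", "text": "Summarize in one line"},
    ]},
]
# The assistant's reply is replayed before the next user turn.
messages.append({"role": "assistant",
                 "content": [{"type": "text", "text": "(first summary)"}]})
# Turn 2: a second, different PDF in a new user turn.
messages.append({"role": "user", "content": [
    pdf_block(b"%PDF-1.4 second"),
    {"type": "text", "text": "Summarize in one line"},
]})
```

If the request still fails with a history shaped like this, the 500 is likely server-side rather than a client payload mistake.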

Confidence 89%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

ImportError: cannot import name 'ModelField' from 'pydantic.fields'

Hello. I get an import error when I try to use Anthropic. [code block] [code block] [code block]

Confidence 89%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

Invalid API Key when using claude 2.0

Windows 11 with Python 3.10, using the code below. It results in an "Invalid API Key" error, but I'm sure the api_key is good because I get a good response via an unofficial API call (from another GitHub repository).

    from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

    anthropic = Anthropic(api_key="sk-ant-XXXXXX")

    def getResponse(prompt):
        msg = f"{HUMAN_PROMPT} {prompt} {AI_PROMPT}"
        print(msg)
        completion = anthropic.completions.create(
            model="claude-2",
            max_tokens_to_sample=30000,
            prompt=msg,
        )
        res = completion.completion
        print(res)
        return res

    if __name__ == "__main__":
        getResponse("Hello, Claude")

The last 3 lines of the error messages:

    File "D:\Python310\anthropic\lib\site-packages\anthropic\_base_client.py", line 761, in _request
        raise self._make_status_error_from_response(request, err.response) from None
    anthropic.AuthenticationError: Error code: 401 - {'error': {'type': 'authentication_error', 'message': 'Invalid API Key'}}

Appreciate your help. Thanks.

Confidence 89%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 1 year ago

Batch API does not support cache_control

The Batch API failed with the error message > messages.5.content.0.text.cache_control: Extra inputs are not permitted. I have indicated `betas` both in `params` and in `client.beta.messages.batches.create`. My prompts look like [code block]
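Until the Batches API accepts `cache_control`, one possible workaround (my assumption, not an official fix, and it forgoes prompt caching) is to strip those keys from content blocks before building the batch params:

```python
def strip_cache_control(messages):
    """Return a copy of the messages with any cache_control keys removed
    from content blocks, since the Batches API rejects them as extra inputs."""
    cleaned = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            content = [
                {k: v for k, v in block.items() if k != "cache_control"}
                for block in content
            ]
        cleaned.append({**msg, "content": content})
    return cleaned

msgs = [{"role": "user", "content": [
    {"type": "text", "text": "hello", "cache_control": {"type": "ephemeral"}},
]}]
clean = strip_cache_control(msgs)
```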

Confidence 88%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

Incremental Streaming vs. Cumulative Streaming

I noticed that Anthropic's APIs use cumulative streaming in the completion endpoint, resulting in repetitive data being sent over the wire. Is there a reason for this design choice? I imagine incremental streaming is typically preferred for efficiency. I see this is on the endpoint and not a Python or TS issue, so it could impact existing customers.
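For what it's worth, cumulative snapshots can be converted to increments client-side by diffing each snapshot against the previous one; a small sketch:

```python
def to_increments(snapshots):
    """Convert cumulative completion snapshots into incremental chunks
    by yielding only the newly appended suffix of each snapshot."""
    prev = ""
    for snap in snapshots:
        yield snap[len(prev):]
        prev = snap

chunks = list(to_increments(["Hel", "Hello", "Hello, wo", "Hello, world"]))
```

This recovers the incremental view, but of course the redundant bytes have already crossed the wire by then, which is the reporter's point.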

Confidence 88%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

ImportError: cannot import name 'Anthropic' from 'anthropic'

Trying to run the basic code for Anthropic and getting this error: AttributeError: module 'anthropic' has no attribute 'Anthropic'. Using anthropic == 0.3.6. Code in my notebook:

    from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

    anthropic = Anthropic(
        # defaults to os.environ.get("ANTHROPIC_API_KEY")
        api_key='replaced with my actual api key',
    )
    completion = anthropic.completions.create(
        model="claude-2",
        max_tokens_to_sample=300,
        prompt=f"{HUMAN_PROMPT} how does a court case get to the Supreme Court? {AI_PROMPT}",
    )
    print(completion.completion)

Confidence 87%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

Memory leak while using anthropic python sdk 0.3.10

We have a server implemented with FastAPI that calls Anthropic through Python, but when running the following experiments, memory kept increasing and was not released after we stopped sending requests to the server: a client sending HTTP requests continuously to the server, concurrency 20, running duration 2 mins. Sample server code logic: [code block] When I replaced the Anthropic Python SDK with a direct httpx request, memory stayed low and stable during the same experiment. The same issue occurs with 0.3.11 as well. Would be great if you could take a look.

Confidence 87%
✓ Verified Fix Available
1 fix ✓ 3 verified
🤖 AI & LLMs · Anthropic
Fresh · about 1 year ago

OpenAI Compatibility using Claude API Key

Hi, I don't know if this is the right place to ask, but the Claude API is one of the best for code generation. Some multi-agent frameworks, however, use OpenAI-compatible code to run their agents. So is there any future release of an OpenAI-compatible API planned? Because I believe the base URL is the only thing standing in the way.

Confidence 87%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 1 year ago

httpx.PoolTimeout, fixed by disabling HTTP keepalive

This is a deeply odd error, insofar as it happens 100% of the time, but only when my code invokes Anthropic (via client library 0.28.0) from our CI environment (hosted in AWS us-east-2). We get an `anthropic.APITimeoutError: Request timed out`, thrown from an `httpx.PoolTimeout`, thrown from an `httpcore.PoolTimeout`, during a generally unremarkable test case. The following change in usage/invocation prevents it entirely: [code block]

Confidence 87%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · 3 months ago

Adding extra_headers for the anthropic web-fetch tool causes the structured_output example to fail

Adding the `extra_headers` and `tools` options in the `messages.create` call causes the call with `output_format` to fail. I am using version 0.75 of the SDK. I took the example from here: https://platform.claude.com/docs/en/build-with-claude/structured-outputs#quick-start and added these two options in the create call: [code block] This causes the example to fail with this error: [code block] Are these options incompatible with structured output? I'm not sure whether this is an SDK error or an error in the API itself; please close if not appropriate.

Confidence 87%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · 12 months ago

anthropic[vertex] has missing `requests` dependency

I installed anthropic[vertex]: [code block] Then wrote this code: [code block] It runs into the following exception: [code block] Looks like google-auth needs the requests library: https://github.com/googleapis/google-auth-library-python/blob/3fae8f8368d4651cd11d4af3d80f687eab033175/google/auth/transport/requests.py#L28 google-auth defines a "requests" extra: https://github.com/googleapis/google-auth-library-python/blob/3fae8f8368d4651cd11d4af3d80f687eab033175/setup.py#L33 But anthropic[vertex] uses a plain google-auth dependency: https://github.com/anthropics/anthropic-sdk-python/blob/a3c59fc77610122a302aec1e7a2a59bbce94dbb2/pyproject.toml#L39

Confidence 87%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · 5 months ago

AttributeError: module 'anthropic' has no attribute 'beta'

Hi, I am testing the Anthropic API / MCP integration in the Python SDK. I received the error: "AttributeError: module 'anthropic' has no attribute 'beta'". Have you run into this? Any idea if the Python SDK is ready for the beta Messages API? My code: [code block] Reference doc/example: https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector Anthropic SDK version: 0.62.0 Anthropic SDK libraries / namespaces: `['AI_PROMPT', 'APIConnectionError', 'APIError', 'APIResponse', 'APIResponseValidationError', 'APIStatusError', 'APITimeoutError', 'Anthropic', 'AnthropicBedrock', 'AnthropicError', 'AnthropicVertex', 'AsyncAPIResponse', 'AsyncAnthropic', 'AsyncAnthropicBedrock', 'AsyncAnthropicVertex', 'AsyncClient', 'AsyncMessageStream', 'AsyncMessageStreamManager', 'AsyncStream', 'AuthenticationError', 'BadRequestError', 'BaseModel', 'BetaAsyncMessageStream', 'BetaAsyncMessageStreamManager', 'BetaContentBlockStopEvent', 'BetaInputJsonEvent', 'BetaMessageStopEvent', 'BetaMessageStream', 'BetaMessageStreamEvent', 'BetaMessageStreamManager', 'BetaTextEvent', 'Client', 'ConflictError', 'ContentBlockStopEvent', 'DEFAULT_CONNECTION_LIMITS', 'DEFAULT_MAX_RETRIES', 'DEFAULT_TIMEOUT', 'DefaultAioHttpClient', 'DefaultAsyncHttpxClient', 'DefaultHttpxClient', 'HUMAN_PROMPT', 'InputJsonEvent', 'InternalServerError', 'MessageStopEvent', 'MessageStream', 'MessageStreamEvent', 'MessageStreamManager', 'NOT_GIVEN', 'NoneType', 'NotFoundError', 'NotGiven', 'Omit', 'PermissionDeniedE

Confidence 87%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

Model defaults to version 2.1 even when specified to claude-2

Hello, and thank you for providing access to the SDK and Claude APIs. Two notes: 1. Pydantic versioning: the Anthropic SDK currently has a hard dependency on Pydantic 1.x, which breaks other modules that depend on higher versions. 2. Model versioning in the Completion API: when using the Python SDK's completion API, the specified model version does not appear to be honored. Despite selecting "claude-2" as the model, the API defaults to version 2.1. Below are the relevant version details and execution code. Version Information: [code block] Execution Code: [code block] Response: [code block]

Confidence 87%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 1 year ago

AnthropicVertex stream chat generation is taking too much time

Recently, I started using AnthropicVertex instead of direct Anthropic. When I try to generate data through the AnthropicVertex client, it takes around 2s to start streaming, whereas direct Anthropic does not take this long. The 2s duration is also variable: sometimes it takes quite a long time, going up to 6-10s, and in the worst case up to 20s. Is there some kind of queue involved? I am using the same code given in the Vertex AI Anthropic notebook to generate responses. Is there any workaround I need to apply to get responses as fast as direct Anthropic? If someone could guide me on this, it would be really helpful. Thanks!!

Confidence 86%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

Tokenizer Error

When I run: pip install anthropic

Then:

    from anthropic import Anthropic
    client = Anthropic()
    client.count_tokens('Hello world!')
    3

I get the error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File ".pyenv\pyenv-win\versions\3.11.3\Lib\site-packages\anthropic\_client.py", line 225, in count_tokens
        tokenizer = self.get_tokenizer()
      File ".pyenv\pyenv-win\versions\3.11.3\Lib\site-packages\anthropic\_client.py", line 230, in get_tokenizer
        return sync_get_tokenizer()
      File ".pyenv\pyenv-win\versions\3.11.3\Lib\site-packages\anthropic\_tokenizers.py", line 33, in sync_get_tokenizer
        text = tokenizer_path.read_text()
      File ".pyenv\pyenv-win\versions\3.11.3\Lib\pathlib.py", line 1059, in read_text
        return f.read()
      File ".pyenv\pyenv-win\versions\3.11.3\Lib\encodings\cp1252.py", line 23, in decode
        return codecs.charmap_decode(input,self.errors,decoding_table)[0]
    UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980: character maps to <undefined>

I'm on a Windows machine, and setting the PYTHONUTF8 environment variable to 1 (via environment variables in system settings) didn't work either.

Confidence 86%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · almost 2 years ago

Unexpected failure when `None` is passed for optional API endpoints, due to `NotGiven` class

I was surprised to see an error when creating a response like so: [code block] It works if I omit the system kwarg entirely. But it's generally intuitive to use None for higher-order functions where kwargs are optional, and it fails here. I've not seen this kind of type before, and I would have hoped that None would be accepted too: [code block]
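A common client-side workaround is to build the kwargs dict conditionally so that None never reaches the SDK; sketched here with a plain dict standing in for the real create call (`build_create_kwargs` is a hypothetical helper, not part of the SDK):

```python
def build_create_kwargs(model, system=None, **extra):
    """Build kwargs for a create-style call, omitting optional keys
    entirely when the caller passes None (mimicking NOT_GIVEN behavior)."""
    kwargs = {"model": model, **extra}
    if system is not None:
        kwargs["system"] = system
    return kwargs

with_system = build_create_kwargs("claude-2", system="Be terse.", max_tokens=100)
without_system = build_create_kwargs("claude-2", system=None, max_tokens=100)
```

The resulting dict can then be splatted into the call (`client.messages.create(**kwargs)`), so the optional key is simply absent rather than None.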

Confidence 86%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 2 years ago

Claude-2 100k model never outputs more than 4k tokens

Even though claude-2 has a 100k-token context window, I cannot get it to generate more than 4k tokens. It interrupts generation after exactly 4096 tokens, even if I set `max_tokens_to_sample` to more than 4096. I tried both anthropic-sdk-python and the web interface; both interrupt in the same place. Code to reproduce: [code block] If you run that code you can see that the completion is interrupted mid-sentence after reaching 4096 tokens: [code block] Those are the last chars of the output. And in the developer log on the Anthropic page it looks like this: <img width="839" alt="Screenshot 2023-09-15 at 11 00 37" src="https://github.com/anthropics/anthropic-sdk-python/assets/120242470/6be42ad7-4038-41eb-9b4d-67c5d8aaa3d4"> So the 100k tokens are only for input, and only 4k of them for output? That is very unexpected (because OpenAI models are not like that) and undocumented behaviour.

Confidence 86%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · almost 2 years ago

SDK returns empty text blocks, but does not accept them

Using the anthropic SDK via the langchain_anthropic integration: [code block] The SDK can return empty text blocks from models: [code block] However, the SDK does not accept empty text blocks when re-interacting with the model (e.g., when using an agent).
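One workaround (my assumption, not an official recommendation) is to filter empty text blocks out of the history before replaying it; a sketch over plain-dict content blocks:

```python
def drop_empty_text(messages):
    """Remove text blocks whose text is empty or whitespace-only before
    replaying a conversation back to the API."""
    cleaned = []
    for msg in messages:
        blocks = [
            b for b in msg["content"]
            if not (b["type"] == "text" and not b.get("text", "").strip())
        ]
        if blocks:  # drop messages left with no content at all
            cleaned.append({**msg, "content": blocks})
    return cleaned

history = [{"role": "assistant", "content": [
    {"type": "text", "text": ""},
    {"type": "tool_use", "id": "tu_1", "name": "search", "input": {}},
]}]
cleaned = drop_empty_text(history)
```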

Confidence 86%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · about 2 years ago

Anthropic's dependent library "Anyio" is incompatible with Gunicorn worker class "Eventlet" or "gevent"

I have a Python application that uses eventlet for serving requests. I also need to install anthropic, but it never works well with the application because of the async library AnyIO. Can't anthropic work without AnyIO? We could provide the asynchronous functionality externally, right?

Confidence 86%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · 8 months ago

Test suite fails with pytest 8

On openSUSE Tumbleweed, when we try to run the test suite, we see failures with the following errors. [code block]

Confidence 86%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · about 1 year ago

Suddenly stopped working: Got an unexpected keyword argument 'proxies'

I have a script that uses multiple LLMs such as Gemini, GPT, and Claude. I am not actually using Claude, but the code is still there in case I want to. While updating the Google GenAI SDK with pip install google-genai, Claude suddenly stopped working. I haven't touched the Claude part; the only thing I did was the pip install for google-genai. I'm not sure whether that is related at all, but besides that I have made no changes. Using version 0.39.0 of anthropic. [code block] Inside llm_configs I have this: [code block] If I uncomment CLAUDE_CLIENT it works.

Confidence 86%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · almost 2 years ago

AnthropicBedrock does not expose `with_options`

Hi all, I'm currently in the process of migrating from `anthropic_bedrock` to `anthropic` and noticed that `client.with_options()...` is not exposed for clients of type `anthropic.AnthropicBedrock`. For clients of the old Bedrock library, `anthropic_bedrock.AnthropicBedrock.with_options()...` was properly exposed. Are there any plans to do so and bring the functionality of `anthropic.AnthropicBedrock` more in line with `anthropic.Anthropic`? Best

Confidence 85%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · 11 months ago

RecursionError in AsyncAnthropic from very large number of retries

I'm using a large number of max_retries. This can result in a stack overflow because the retry is done recursively: https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/_base_client.py#L1662 The stack trace ends with: [code block]
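The usual remedy for this class of bug is an iterative retry loop, which keeps stack depth constant no matter how large max_retries is; a stdlib-only sketch (the names are illustrative, not the SDK's internals):

```python
import time

def request_with_retries(send, max_retries, base_delay=0.0):
    """Call send() until it succeeds or retries run out. Using a loop
    instead of recursion keeps stack depth constant for any max_retries."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return send()
        except Exception as exc:  # real code would retry only retryable errors
            last_exc = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_exc

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

# Even an absurd retry budget cannot overflow the stack here.
result = request_with_retries(flaky, max_retries=100_000)
```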

Confidence 85%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · almost 2 years ago

Casting to InputSchema drops $defs, causing an InternalServerError

Consider a tool-calling request with the following schema: [code block] Anthropic correctly handles this so long as `$defs` is present; should it be dropped, the `enum` fields become unresolvable, causing an error of the form of: [code block] Unfortunately, using `pydantic.TypeAdapter(anthropic.types.beta.tools.tool_param.InputSchema).validate_python(tool.model_json_schema())` drops the `$defs` field from the representation -- after getting the aforementioned HTTP 500 error, if we walk the stack up to `anthropic._base_client.post`, we see a body where `$defs` is not present at all, despite references to it existing: [code block] ...thus explaining the HTTP 500. And, indeed, if the use of `pydantic.TypeAdapter(InputSchema).validate_python(tool.model_json_schema())` is replaced with `typing.cast(InputSchema, tool.model_json_schema())`, the issue is resolved. This is unfortunate: Users of this library should not need to give up type safety to avoid triggering server-side errors.

Confidence 85%
Candidate Fix
1 fix ✓ 1 verified
🤖 AI & LLMs · Anthropic
Fresh · over 1 year ago

Iterable vs List in the request datatypes

Consider the definition of `MessageParam`: [code block] The `Iterable` here doesn't allow me to write any kind of useful transformations over the datatype without changing its state in the process: [code block] which is quite inconvenient and could easily lead to all kind of errors. Is there any reason to use `Iterable` here and in other request/response datatypes? Could it be replaced with `List`?
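The practical hazard with `Iterable` is that a generator argument is consumed on first traversal; a minimal demonstration:

```python
from typing import Iterable

def count_user_turns(messages: Iterable) -> int:
    return sum(1 for m in messages if m["role"] == "user")

turns = [{"role": "user"}, {"role": "assistant"}, {"role": "user"}]

gen = (m for m in turns)
first = count_user_turns(gen)   # consumes the generator
second = count_user_turns(gen)  # the generator is now exhausted

as_list = list(turns)           # a List survives repeated traversal
```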

Confidence 85%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · 8 months ago

Error result: Streaming is strongly recommended for operations that may take longer than 10 minutes.

What's the problem? Error result: "Streaming is strongly recommended for operations that may take longer than 10 minutes." See https://github.com/anthropics/anthropic-sdk-python#long-requests for more details

Confidence 84%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 1 year ago

Is there a Rust SDK?

I couldn't find one, so I'm making one, based on this. Let me know if one already exists, and/or you're interested. Thanks!

Confidence 84%
Candidate Fix
1 fix ✓ 2 verified
🤖 AI & LLMs · Anthropic
Fresh · over 1 year ago

Please add an endpoint for getting account balance.

Please add an endpoint for getting account balance. e.g. `/balance` or `/account_details`. Many of us are creating products that use Anthropic's HTTP API where the user provides their API key. A good UX would be for the user to see their remaining balance. A user that is prepaying to their account will suffer a bad UX when their balance runs out. As an analogy, when my laptop battery is low, my machine doesn't suddenly power down while I am in the middle of a task. Instead I can observe the battery charge on my system tray, and a notification appears when it reaches 5%. This is a healthy UX. PS: It would be nice to have an `/account_details` endpoint that includes a list of timestamped API calls and their associated costs and IPs, as well as any other information that the user can retrieve from the web portal. However I can see this might be contentious when it comes to granting API keys for others to use. You'd have to tick checkboxes when creating an API key to grant individual functionalities. And that would be nice. If I could create an API key, uncheck "master-mode", set a per-day limit of $5, that would be nice. I could provide a capped API key to each user when I distribute my product. Or I could grant API keys to my team. And if I could create a "master-mode" API key for my own use, which lets me view my account details as well as details of any other API key I have generated, that would be nice. But this starts to become a medium-to-large sized task. Just p

Confidence 80%
✓ Verified Fix Available
1 fix ✓ 5 verified
🤖 AI & LLMs · Anthropic
Fresh · 5 months ago

AI agent loses conversation context after tool call in multi-turn chat

In a multi-turn AI agent loop, the conversation history is not properly maintained between tool calls. After the model calls a tool, the developer appends only the tool result to the messages array without including the original assistant message that contained the tool_use block. The API then returns a validation error because the tool_result message has no preceding tool_use to reference.
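The fix described above can be checked mechanically: every tool_result block must reference a tool_use id from an earlier assistant turn. A sketch of such a check over plain dicts (the message shapes are simplified):

```python
def history_is_valid(messages):
    """True if every tool_result references a tool_use id seen in an
    earlier assistant message, which is what the API validates."""
    seen_tool_use_ids = set()
    for msg in messages:
        for block in msg["content"]:
            if msg["role"] == "assistant" and block["type"] == "tool_use":
                seen_tool_use_ids.add(block["id"])
            if block["type"] == "tool_result" and block["tool_use_id"] not in seen_tool_use_ids:
                return False
    return True

assistant_turn = {"role": "assistant", "content": [
    {"type": "tool_use", "id": "tu_1", "name": "get_weather", "input": {}},
]}
tool_result_turn = {"role": "user", "content": [
    {"type": "tool_result", "tool_use_id": "tu_1", "content": "72F"},
]}

broken = [tool_result_turn]                 # assistant turn dropped: invalid
fixed = [assistant_turn, tool_result_turn]  # assistant turn replayed: valid
```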

Confidence 79%
✓ Verified Fix Available
1 fix ✓ 10 verified
🤖 AI & LLMs · Anthropic
Fresh · 11 days ago

Claude API tool use: response content block is text, not tool_use, causing type error

When using Claude's tool use feature, code that assumes every response contains a tool_use content block crashes when Claude decides to respond with a text message instead of calling a tool. This happens when the model determines a tool call is not needed (e.g. the answer is in its context), or when max_tokens is hit before the tool call completes. The content array must always be checked for block type before accessing tool_input.
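A defensive pattern for this is to search the content array for a tool_use block rather than indexing block 0; a sketch over plain dicts:

```python
def first_tool_use(content):
    """Return the first tool_use block in a response's content array,
    or None when the model answered in plain text instead."""
    for block in content:
        if block["type"] == "tool_use":
            return block
    return None

# A response where the model called a tool (after some preamble text)...
tool_response = [
    {"type": "text", "text": "Let me check."},
    {"type": "tool_use", "id": "tu_9", "name": "lookup", "input": {"q": "x"}},
]
# ...and one where it simply answered in text.
text_only = [{"type": "text", "text": "The answer is 4."}]

block = first_tool_use(tool_response)
```

Callers then handle the None case explicitly instead of crashing on a missing `input` attribute.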

Confidence 78%
✓ Verified Fix Available
1 fix ✓ 10 verified
🤖 AI & LLMs · Anthropic
Fresh · almost 2 years ago

API client lacks a messages attribute

👋 I'm trying to use the Claude API, and I can't even get the client to work properly following the SDK example to the character. Anything I'm doing wrong? <img width="646" alt="image" src="https://github.com/anthropics/anthropic-sdk-python/assets/55989773/2d2da778-7c4e-4328-86db-e57da305a619">

Confidence 75%
✓ Verified Fix Available
1 fix ✓ 3 verified
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Allow System Prompt Within Messages As Opposed to Top Level Argument

Most LLM providers (including the OpenAI SDK) support "system" as a role within the `messages` argument. Allowing `messages` to support this would make it much easier for developers to switch to or use your models.
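In the meantime, a small shim can make OpenAI-shaped histories work by hoisting system turns into the top-level argument; a sketch (joining multiple system turns with blank lines is my assumption):

```python
def split_system(messages):
    """Hoist 'system'-role turns out of an OpenAI-style message list,
    returning (system_prompt, remaining_messages) for the Anthropic API."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return "\n\n".join(system_parts), rest

system, msgs = split_system([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hi"},
])
```

The pair can then be passed as `client.messages.create(system=system, messages=msgs, ...)`.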

Confidence 55%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Empty args/inputs when turning streaming on and setting tool choice to any

Description When calling the Anthropic client with `streaming=True` and `tool_choice={"type": "any"/"tool" }` the output returns a tool call but with empty args. This is problematic for a few other reasons beyond no args being returned. For example, quite a few packages rely on the `anthropic-sdk`, one of which is `langchain-anthropic` (ref). Expected response I would expect that the output includes the inputs/args required for the tool call when streaming. Reproduction steps I've added a notebook to highlight some things: https://gist.github.com/kwnath/f42737c023767d5effdcca20cb5bd0a6

Confidence 52%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

How to achieve an effect similar to uploading file attachments on the web page in the API?

I have some plain text files to process, but I found that if I upload them as attachments on the web page, the response I get is significantly better than the results obtained via the API. I would like to know if there is any way to make the API results look like the web page results. The format of the prompt I send via the API is roughly like this: [code block] What I send on the web page is "{Introduction}", and then I upload the file as an attachment. My understanding is that the web page uses a fixed format to join the file content and my prompt, which yields better results. Can I get this splicing format? If you need the text of the {introduction} part or the specific text files involved, please contact me, thank you very much! This problem has bothered me for several days; I'm looking forward to your reply :)

Confidence 51%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

`pip install anthropic` currently fails on Python 3.13

One of the `anthropic` dependencies is not yet available for Python 3.13 - `tokenizers` needs `pyo3-ffi` and that's not on Python 3.13 yet: - https://github.com/PyO3/pyo3/issues/4554 This means nothing that runs Python 3.13 can `pip install anthropic` right now. Is the `tokenizers` dependency really necessary? I think it's only there for a VERY old count tokens feature which isn't actually that useful because it still uses the Claude 2 tokenizer: https://github.com/anthropics/anthropic-sdk-python/blob/cd80d46f7a223a5493565d155da31b898a4c6ee5/src/anthropic/_client.py#L270-L286

Confidence 50%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Exception: EOF while parsing a value at line 1 column 0

Hello, I'm on Windows and installed with `pip install git+https://github.com/anthropics/anthropic-sdk-python.git` Seeing this issue when issuing a standard completion prompt. This is the command: [code block] Here is the complete error: [code block]

Confidence 50%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Vertex "Could not resolve API token from the environment"

<img width="627" alt="Screenshot 2024-07-10 at 4 56 29 PM" src="https://github.com/anthropics/anthropic-sdk-python/assets/44094672/470faa05-144f-4ab8-892d-5ac8d839a2c5"> For some reason, after the update Vertex now keeps asking for API tokens, but we are using a GCP service account to authorize Vertex AI, and it was fine until this morning. Did the new release change or require something new?

Confidence 50%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

bedrock: How to tokenize and count tokens?

It seems that Bedrock has lost a lot of features. For example: how do I tokenize and count tokens? And is there an API guide? I can only find 2 simple code examples.

Confidence 50%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

stream.get_final_message() does not return the correct usage of output_tokens

Like the title says, `stream.get_final_message()` always returns `output_tokens` with the value `1`. Running the example code examples/messages_stream.py, the output looks like: [code block] However, the actual `output_tokens` should be `6` according to the raw HTTP stream response: [code block] So, is this a bug or a feature? I've seen someone in issue #417 using `stream.get_final_message()` to obtain usage information; if `output_tokens` always returns 1, this won't work properly, I guess.
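Until this is resolved, the true count can be read from the raw stream events, where the final message_delta carries the cumulative output_tokens; a sketch over simplified event dicts:

```python
def final_output_tokens(events):
    """Track the last output_tokens reported by message_delta events,
    which carry the cumulative output token count for the response."""
    output_tokens = 0
    for event in events:
        if event["type"] == "message_delta":
            output_tokens = event["usage"]["output_tokens"]
    return output_tokens

# A simplified trace of the stream described above: message_start
# reports 1 output token, the closing message_delta reports 6.
events = [
    {"type": "message_start",
     "message": {"usage": {"input_tokens": 8, "output_tokens": 1}}},
    {"type": "content_block_delta",
     "delta": {"type": "text_delta", "text": "Hello"}},
    {"type": "message_delta", "usage": {"output_tokens": 6}},
]
```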

Confidence 50%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Opus 4.6 with adaptive thinking: Redundant text block (text -> think -> text)

[code block] [code block] Expected: [code block] There is no obvious way to distinguish the first text block.

Confidence 50%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

File descriptor leak when using Anthropic client

I have tested anthropic 0.3.11 and 0.2.9; both appear to have a file descriptor leak. Code 1: [code block] Code 2: [code block] Both code paths eventually led to a file descriptor leak after 5000-10000 loops. I also tried removing the retry, and the same file descriptor leak happened. When I changed to using [code block] everything was fine: no file descriptor leak at all.

Confidence 49%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Bedrock Claude 3.5 Sonnet v2 is not supporting new attachments (PDF)

I use Anthropic's Claude models via Amazon Bedrock. I wanted to try the new model `Claude 3.5 Sonnet v2` with PDF attachment support, but it's not working. I am using Python and tried both the `boto3` library (`bedrock-runtime` client) and `anthropic.AnthropicBedrock`. I could not find any documentation from either AWS or Anthropic on how to attach a PDF file, so I'm just proceeding by trial and error. Boto: I tried the Bedrock playground and it works in the UI. I exported the messages as JSON (see screenshot) and then coded the same in Python using boto3. <img width="1443" alt="image" src="https://github.com/user-attachments/assets/efae6031-e4ea-45f5-81ca-934a7156118a"> [code block] It gives me this error: [code block] Maybe AWS needs to add this support in their validator. Anthropic: I also tried the Anthropic SDK for Python with `AnthropicBedrock`, taking help from the code in PR #721: [code block] It gives me this error: [code block]

Confidence 49%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

https_proxy not working when upgrade to v0.47.0

https_proxy stopped working after upgrading to v0.47.0; it works fine in v0.46.0.

Confidence 49%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Feature request: return the number of failed attempts

You allow a "max_retries" parameter. It would be great to return, along with the API answer, the number of failed API calls, so that people have an idea of the load. I don't know what your policy toward outside contributions is, but if you agree, I can implement it.

Confidence 49%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Getting citations in an unexpected format when using the web fetch tool

When using the newly launched web fetch tool, I am getting citations in an unexpected format: [code block] As you can see, `citations=None`, and odd `<cite>` XML tags are present in the text content. Is this the expected behaviour, or is there a bug?

Confidence 49%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

AttributeError: module 'socket' has no attribute 'TCP_KEEPINTVL' on Windows and macOS

I'm trying to use the anthropic-sdk on both Windows and macOS, but I'm encountering the following error on both platforms: [code block] After investigating, I found that the socket.TCP_KEEPINTVL attribute isn't available on all platforms, particularly Windows and macOS. To resolve the issue locally, I modified _base_client.py as follows: [code block] With this change, the SDK works as expected on both platforms. Would this be a valid fix to submit as a PR? Happy to open one if this approach looks good.
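The guard the reporter describes can be written portably with hasattr, including each keepalive option only when the platform defines it; a stdlib sketch (not the SDK's actual patch):

```python
import socket

def keepalive_options():
    """Build (level, option, value) socket options, skipping any TCP
    keepalive constants the current platform does not define."""
    candidates = [
        ("IPPROTO_TCP", "TCP_KEEPIDLE", 60),
        ("IPPROTO_TCP", "TCP_KEEPINTVL", 10),
        ("IPPROTO_TCP", "TCP_KEEPCNT", 3),
    ]
    opts = []
    for level_name, opt_name, value in candidates:
        if hasattr(socket, level_name) and hasattr(socket, opt_name):
            opts.append((getattr(socket, level_name),
                         getattr(socket, opt_name), value))
    return opts

opts = keepalive_options()  # never raises AttributeError on any platform
```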

Confidence 48%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Source distribution tarballs on pypi do not contain README.md, causing hatch-fancy-pypi-readme plugin to fail

The source distributions published to pypi presently cannot be built; attempts fail with: [code block] Inspecting the tarball to list non-`.py` content shows only: [code block] Notably, there is indeed no README.md file present.

Confidence 48%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

nest_asyncio hangs AsyncAnthropic

When running with `nest_asyncio.apply()`, the AsyncAnthropic execution hangs, staying in the thread: [code block]

Confidence 48%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Count Tokens for Bedrock

I am getting the following error when trying to count tokens with Bedrock: AttributeError: 'AnthropicBedrock' object has no attribute 'count_tokens'. It seems there is no implementation for it. Thanks.

Confidence 48%
Candidate Fix
1 fix
🤖 AI & LLMs · Anthropic
Fresh · about 19 hours ago

Support Pydantic >= 2.0.0 ?

Hello, I noticed that the project is currently depending on pydantic version "^1.9.0". Given that pydantic is already at 2.1.1, I was wondering if there are any plans to upgrade your dependencies for this project. Thanks!

Confidence 48%
Candidate Fix
1 fix