
All Issues

24,993 verified issues

🤖 AI & LLMs
Fresh · 10 days ago

Support Vector Sum Aggregation

First, love this extension and I'm very happy RDS started supporting it. Apologies if this exists and I missed it but it would be great to add support for an aggregate sum function. There are some use cases where it is preferable to sum embedding vectors. Additionally, it would be very convenient to be able to use this extension for general purpose vector operations (as opposed to custom array_agg functions/aggregates).
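Concretely, the requested aggregate would compute an element-wise sum over embedding vectors. A plain TypeScript sketch of those semantics (illustrative only, not pgvector code):

```typescript
// Element-wise sum of equal-length embedding vectors — the semantics the
// requested SUM aggregate would have over a vector column.
function sumVectors(vectors: number[][]): number[] {
  if (vectors.length === 0) return [];
  const dims = vectors[0].length;
  const acc = new Array<number>(dims).fill(0);
  for (const v of vectors) {
    if (v.length !== dims) throw new Error('dimension mismatch');
    for (let i = 0; i < dims; i++) acc[i] += v[i];
  }
  return acc;
}
```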

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs
Fresh · 10 days ago

Index not used with CTE

Indexes (ivfflat at least) are not used when the query is a CTE: [code block] ==> [code block] Whereas the index is used with a subquery: [code block] [code block] Tested on pgvector 0.80.0 and PostgreSQL 15 (Windows). Tested on pgvector 0.80.0 and PostgreSQL 17 (Linux/Docker).

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

Module '".../node_modules/zod/index"' has no default export. import type z from 'zod';

Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug I am programming an assistant using the openai library, but when trying to build my code it gives the following error. [code block] Wouldn't it be better to import only the necessary types from zod instead of all types? [code block] This way, it would benefit tree-shaking, as we are only importing what we need. To Reproduce 1. Install the openai library 2. Build node in watch mode Code snippets _No response_ OS Ubuntu Node version v20.15.0 Library version openai v4.56.0

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs
Fresh · 10 days ago

ERROR: extension "vector" has no installation script nor update path for version "0.6.2"

postgres=# CREATE EXTENSION vector; ERROR: extension "vector" has no installation script nor update path for version "0.6.2" I got the above error while adding the extension to Postgres after following the instructions below. I tried every possible way, but there seems to be an issue with v0.6.2: cd /tmp git clone --branch v0.6.2 https://github.com/pgvector/pgvector.git cd pgvector make make install

Confidence: 49% · 1 candidate fix

🔌 APIs & SDKs · Twilio
Fresh · 10 days ago

[Bug] `validateRequest()` is not working when a query param includes a single quote (`'`)

Issue Summary The `validateRequest()` function is not working properly when a query param value includes a single quote (`'`) (and probably more special characters). This bug seems to have been introduced by this commit https://github.com/twilio/twilio-node/commit/18c6d6f184552cf85c11f1098633d8228d81bb87 Why? The quote gets escaped when using `new URL()`, and the Twilio server seems to generate the signature with an unescaped quote [code block] Steps to Reproduce 1. Set up a call with a redirect URI that has a query param with a quote in it 2. When the call is redirected to the server, the validation does not pass 3. This will also happen if `ToState`, `FromState`, or any other query param automatically added by Twilio includes a quote, and the server returns a `307 - Temporary Redirect` to a different URL, for example `Forli'` or `Trezzo Sull'Adda` Our use case 1. A caller starts a call to the state `Trezzo Sull'Adda` 2. The caller hangs up 3. We receive the hang-up command via `POST` and respond with `307 - Temporary Redirect` to `Location: https://api.example.com` 4. `api.example.com` receives the redirect with `GET` method and the body as query params instead 5. `validateRequest()` now fails because of the single quote Code Snippet [code block] Exception/Log The validation returns `false` Technical details: twilio-node version: `5.4.0` node version: `v22.11.0`
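The escaping mismatch can be reproduced with Node's built-in WHATWG `URL` class alone; the hostname below is just an example:

```typescript
// For special schemes such as https, WHATWG URL serialization percent-encodes
// the query with the special-query percent-encode set, which includes the
// single quote — so the re-serialized URL differs from the raw signed one.
const raw = "https://api.example.com/hook?state=Trezzo Sull'Adda";
const normalized = new URL(raw).href;
// The quote (and the space) come back percent-encoded:
console.log(normalized);
```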

Confidence: 49% · 1 candidate fix

🔌 APIs & SDKs · Twilio
Fresh · 10 days ago

[Feature Request]: WhatsApp Typing Indicator

Preflight Checklist - [x] I have read the Contributing Guidelines for this project. - [x] I agree to follow the Code of Conduct that this project adheres to. - [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success. - [x] This is not a general Twilio feature request or bug report. It is a feature request for the twilio-node JavaScript package. Problem Description Hello Twilio Team, We're writing to request support for typing indicators on the WhatsApp channel. This is a critical feature for our user experience, and we’ve noticed that WhatsApp has supported this natively in their Cloud API since 2023 (Meta Docs). We understand that typing indicators are available in the Conversations SDK, but this doesn't seem to apply to WhatsApp, leaving a significant gap for us. The absence of this feature is becoming a blocker for our product roadmap and is a major factor in our long-term platform decisions. Could you share if supporting WhatsApp typing indicators is on your roadmap? Any timeline you can provide would be crucial as we evaluate our future with Twilio. Thanks for your attention to this. Proposed Solution Meta Docs Alternatives Considered Migrate to other providers Additional Information _No response_

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · Anthropic
Fresh · 10 days ago

Bedrock Claude 3.5 Sonnet v2 does not support new attachments (PDF)

I use Anthropic's Claude models via Amazon Bedrock. I wanted to try the new model `Claude 3.5 Sonnet v2` with PDF attachment support, but it's not working. I am using Python and tried both libraries: `boto3` (`bedrock-runtime` client) and `anthropic.AnthropicBedrock`. I could not find any documentation from either AWS or Anthropic on how to attach a PDF file, so I'm just going by trial and error. Boto For example, I tried using the Bedrock playground and it works in the UI. I exported the messages as JSON (see screenshot: https://github.com/user-attachments/assets/efae6031-e4ea-45f5-81ca-934a7156118a) and then coded the same in Python using `boto3`. [code block] It gives me this error: [code block] Maybe AWS needs to add this support in their validator. Anthropic I also tried using the Anthropic SDK for Python with `AnthropicBedrock`. I took help from the code of PR #721 and coded it in Python [code block] It gives me this error: [code block]

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

Unexpected token 'export' from formdata-node when using openai/shims/node

Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug Issues experienced while running unit tests in our Node.js backend. I first encountered the `fetch is not defined` bug, which led me to https://github.com/openai/openai-node/issues/304 and was able to move past it with `import 'openai/shims/node'`. But now I'm receiving the following errors: [code block] I've also added [code block] But I still get the same `SyntaxError: Unexpected token 'export'` error. I've updated the `transformIgnorePatterns` value to include `"/node_modules/(?!formdata-node)"` as well as `"/node_modules/(?!openai/node/shims)"` and many combinations. Still no luck. yarn v1.22.4 typescript v5.1.6 jest v29.7.0 To Reproduce `yarn add openai` I've been able to reproduce it with as little as just the imports. Add the following imports to a file that has unit tests (or a logic/utils file used by a file being tested) [code block] run the tests on that file Code snippets _No response_ OS macOS Node version Node v16.20.0 (also tested with v18.19.0, same results) Library version openai v4.20.0
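One detail worth noting: separate negative-lookahead entries in `transformIgnorePatterns` cancel each other out, because Jest leaves a file untransformed if it matches any one pattern — so every ESM-only package must share a single pattern. A sketch of a `jest.config.ts`, where the `ts-jest` preset and the package list in the lookahead are assumptions for illustration:

```typescript
// jest.config.ts — sketch only. All ESM-only node_modules packages must be
// listed inside ONE negative lookahead; two separate entries would each
// re-ignore the package the other one exempts.
export default {
  preset: 'ts-jest',
  transformIgnorePatterns: ['/node_modules/(?!formdata-node/)'],
};
```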

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

gpt-4-vision-preview does not work as expected.

Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug The response from ChatGPT unexpectedly cuts off when using stream. The response via the API does not match the one through chat; through the API, I only receive the beginning of the response, which unexpectedly cuts off. I think this is related to the bug below: https://github.com/openai/openai-node/issues/499 To Reproduce openai.beta.chat.completions.stream with image_url I use the following image. From the API I got only `The instructions are asking for a modification of the SQL `CREATE TABLE` statement for` From chat I got much more. Code snippets [code block] OS Linux Node version Node v18.16.0 Library version openai 4.22.0

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

OpenAI client prevents process from gracefully terminating

Confirm this is a Node library issue and not an underlying OpenAI API issue - [x] This is an issue with the Node library Describe the bug When using the OpenAI client to stream completions, the process fails to shut down gracefully. To Reproduce Stream and attempt to gracefully shut down. Active handles show instances of `agentkeepalive`, which comes from the OpenAI SDK. [code block] Code snippets [code block] OS macOS Node version 22 Library version 4.83.0
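The mechanism is reproducible with Node's own `http.Agent`, independent of the SDK: with keep-alive enabled, pooled sockets may hang open after requests finish, and Node's documentation recommends explicitly destroying an agent that is no longer in use so the process can exit. A generic sketch (not the SDK's internals):

```typescript
import { Agent } from 'node:http';

// With keepAlive enabled, free pooled sockets can outlive the requests that
// created them and keep handles open.
const agent = new Agent({ keepAlive: true });

// On shutdown, release the socket pool instead of resorting to process.exit().
process.once('SIGTERM', () => {
  agent.destroy();
});
```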

Confidence: 49% · 1 candidate fix

🔌 APIs & SDKs · Twilio
Fresh · 10 days ago

Support paging via nextpageuri

This is a feature request to support paging via the `nextpageuri` attribute included in list GET responses.
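A sketch of what such paging could look like from the caller's side; `Page`, `fetchPage`, and the `nextPageUri` field name are hypothetical stand-ins for whatever the SDK exposes:

```typescript
// Follow next-page URIs until the API reports no further page.
interface Page<T> {
  records: T[];
  nextPageUri: string | null;
}

async function fetchAll<T>(
  fetchPage: (uri: string) => Promise<Page<T>>,
  firstUri: string,
): Promise<T[]> {
  const all: T[] = [];
  let uri: string | null = firstUri;
  while (uri) {
    const page: Page<T> = await fetchPage(uri);
    all.push(...page.records);
    uri = page.nextPageUri;
  }
  return all;
}
```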

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · Anthropic
Fresh · 10 days ago

https_proxy not working after upgrading to v0.47.0

https_proxy stopped working after upgrading to v0.47.0; it works fine in v0.46.0.

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

Pass word_level_timestamps option into whisper API call

Describe the feature or improvement you're requesting When I use the Whisper model directly via the openai/whisper python package (https://github.com/openai/whisper) I am able to pass the `word_timestamps` into the transcription call. See the below snippet: [code block] However, when I make an API call, I'm not able to do this. I find this a very useful feature, as by default, the timestamps in the response are rounded to the nearest second, whereas when passing in `word_timestamps=True` they're much more accurate. Could the `word_timestamps` parameter be added to the API? Additional context _No response_

Confidence: 49% · 1 candidate fix

🔌 APIs & SDKs · Twilio
Fresh · 10 days ago

Twilio Conference Update throws error

Issue Summary I am trying to end the Twilio conference object, but the SDK throws an ambiguous error. Steps to Reproduce 1. Start a conference [code block] 2. Have a participant join the call [code block] 3. As it is dialing, execute the following code: [code block] Desired result: The conference ends. Actual Result: The user who receives the call will be connected to an empty conference with the violin music playing. Exception/Log [code block] Technical details: twilio-node version: ^3.55.0 node version: v15.7.0

Confidence: 49% · 1 candidate fix

🔌 APIs & SDKs · Twilio
Fresh · 10 days ago

Error: getaddrinfo ENOTFOUND conversations.roaming.gll.twilio.com

Issue Summary I found that DNS cannot resolve the domain of the API. For Example: https://conversations.roaming.gll.twilio.com/v1/Users I modified the `getHostname` in `lib/rest/Twilio.js` and removed the edge / region to bypass the problem. Exception/Log [code block] Technical details: twilio-node version: 3.59.0 node version: v12.20.1

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

Zod => JSONSchema conversion creates references to unknown definitions

Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug When compiling a zod schema with multiple references to the same nullable object, the compiled JSONSchema refers to definitions that don't exist. This is using the latest version of the openai client with structured output support. I believe the issue comes from an extracted definition trying to reference an inner extracted definition again -- see the example below. To Reproduce Here's an example zod schema and function call which triggers the issue: [code block] When run, I get this error: [code block] I ninja'd into the source and console.log'd the generated JSON schema, here's what comes out: [code block] Code snippets _No response_ OS macOS Node version v22.2.0 Library version 4.55.3

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · Anthropic
Fresh · 10 days ago

Feature request: return the number of failed attempts

You allow a parameter `max_retries`. It would be great to return, along with the API answer, the number of failed API calls so that people have an idea of the load. I don't know what your policy towards outside contributions is, but if you agree I can implement it.
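The idea can be sketched independently of the SDK; `withRetryCount` below is a hypothetical wrapper, not Anthropic's implementation:

```typescript
// Retry an async call up to maxRetries times, surfacing how many attempts
// failed before the one that succeeded — the count this request asks for.
async function withRetryCount<T>(
  call: () => Promise<T>,
  maxRetries: number,
): Promise<{ result: T; failedAttempts: number }> {
  let failedAttempts = 0;
  for (;;) {
    try {
      return { result: await call(), failedAttempts };
    } catch (err) {
      if (failedAttempts >= maxRetries) throw err;
      failedAttempts++;
    }
  }
}
```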

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · OpenAI
Fresh · 10 days ago

Structured Outputs: JSON Schema 'nullable' modifier ignored

Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug In Zod one can make a field nullable in two ways: with A) a union (eg. `authorName1` below) or B) the `.nullable()` method (eg. `authorName2` below). A: `z.union([z.string(), z.null()])` B: `z.string().nullable()` The final JSON schema will also be different: A: `{ "type": ["string", "null"] }` B: `{ "type": "string", "nullable": true }` When using the latter, B, the LLM (gpt-4o-2024-08-06) always returns a string and never null. The former, A, works well. I would imagine B is quite a bit more common in Zod schemas, so it would make sense to support it. I see many ways to fix this, including just clarifying in the docs that "nullable" has no effect. To Reproduce Consider this Zod schema; `authorName1` and `authorName2` are both nullable strings using two different syntaxes. [code block] Passing it through `zodResponseFormat`: [code block] Output: [code block] As you can see in the generated output, responseFormat also uses two different syntaxes to make the two fields nullable. The model (gpt-4o-2024-08-06) seems to ignore the latter `"nullable": true`. [code block] Code snippets _No response_ OS macOS and Linux Node version Node 20 Library version openai 4.58.1
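Until the library handles form B, one workaround is to post-process the generated JSON Schema and rewrite the OpenAPI-style `nullable: true` into the union form from A. A sketch, assuming the schema is plain JSON data; `normalizeNullable` is hypothetical, not part of the SDK:

```typescript
// Recursively rewrite { type: "string", nullable: true } into
// { type: ["string", "null"] } — the variant-A form the model respects.
function normalizeNullable(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(normalizeNullable);
  if (schema === null || typeof schema !== 'object') return schema;
  const obj = { ...(schema as Record<string, unknown>) };
  if (obj.nullable === true && typeof obj.type === 'string') {
    obj.type = [obj.type, 'null'];
    delete obj.nullable;
  }
  for (const key of Object.keys(obj)) obj[key] = normalizeNullable(obj[key]);
  return obj;
}
```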

Confidence: 49% · 1 candidate fix

🤖 AI & LLMs · Anthropic
Fresh · 10 days ago

Getting citations in unexpected format when using web fetch tool

When using the newly launched web fetch tool, I am getting citations in an unexpected format [code block] As you can see, `citations=None` and weird XML tags `<cite>` are present in the text content. Is this the expected behaviour, or is there a bug?

Confidence: 49% · 1 candidate fix

🔌 APIs & SDKs · Twilio
Fresh · 10 days ago

twilio.webhook() doesn't work with subaccounts

Issue Summary twilio.webhook() doesn't work with subaccounts; validation always fails because there is only one TWILIO_AUTH_TOKEN for the route, which we have set to the primary account. Our setup will have over 25K subaccounts. Steps to Reproduce Create a subaccount under your primary account, provision a number to the subaccount. Send a message to this phone number. The webhook will fail validation because the auth token is for the primary account.

Confidence: 48% · 1 candidate fix
Page 98 of 1250