All Issues
24,993 verified issues
Multi-vector search
Hi, I'm wondering if pgvector supports complex multi-vector search similar to what MS does in https://github.com/microsoft/MSVBASE? Our app is one of those that requires search across multiple vectors and scalars. We're researching new options in addition to Milvus hybrid search. VBASE from MS promises superior performance, but it does not look mature enough.
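For context, a hybrid multi-vector query can be expressed in plain SQL with pgvector by combining several distance expressions with scalar filters. The sketch below only builds the SQL string; table and column names (`items`, `title_embedding`, `body_embedding`, `price`) and the weights are illustrative assumptions, not from the issue.

```javascript
// Sketch: build a hybrid pgvector query that filters on a scalar column
// and ranks by a weighted sum of two cosine distances (<=>).
// NOTE: all names here are hypothetical; ordering by a derived expression
// like this will not use a vector index, so it is a correctness sketch only.
function buildHybridQuery({ titleWeight = 0.7, bodyWeight = 0.3 } = {}) {
  return [
    'SELECT id,',
    `       ${titleWeight} * (title_embedding <=> $1)`,
    `     + ${bodyWeight} * (body_embedding <=> $2) AS score`,
    'FROM items',
    'WHERE price < $3',
    'ORDER BY score',
    'LIMIT 10',
  ].join('\n');
}

const sql = buildHybridQuery();
```

The query would be sent with three parameters: two embedding literals and the scalar threshold.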
[Feature Request] Add XML Comment into TwiML
Feature Request [code block] This should emit an XML comment in the generated output [code block]
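Until such a method exists, one possible workaround is to splice a comment into the serialized TwiML string after it is rendered. This is a sketch, not part of the twilio library; `addXmlComment` and the sample XML are hypothetical.

```javascript
// Workaround sketch: the TwiML helper has no comment method (hence this
// feature request), but a comment node can be spliced into the rendered XML.
function addXmlComment(twiml, comment) {
  // Insert the comment right after the XML declaration, if present.
  const decl = '?>';
  const idx = twiml.indexOf(decl);
  const commentNode = `<!-- ${comment} -->`;
  if (idx === -1) return commentNode + twiml;
  const pos = idx + decl.length;
  return twiml.slice(0, pos) + commentNode + twiml.slice(pos);
}

const xml = addXmlComment(
  '<?xml version="1.0" encoding="UTF-8"?><Response><Say>Hi</Say></Response>',
  'generated by build 42'
);
```

A first-class `comment()` method would still be preferable, since string splicing bypasses the builder's escaping.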
`/var/task/node_modules/openai/_shims/agent-node.mjs` import issue in T3 stack with Vercel Edge function
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug The SDK throws an import error with the default T3 stack config (https://create.t3.gg/) in Edge functions with the `pages` router. First reported in LangChainJS, reproed by me with the SDK by itself: https://github.com/hwchase17/langchainjs/issues/2558 Here is the trace: [code block] To Reproduce 1. Set up an app using the T3 Stack: https://create.t3.gg/ 2. Set up an endpoint under `pages/api/route.ts` 3. Import and use the OpenAI SDK (code below) 4. Deploy to Vercel 5. Ping the endpoint Code snippets [code block] OS Vercel Edge function Node version Edge Library version openai v4.5.0
Unhandled request error event (ETIMEDOUT)
Issue Summary Very rarely, I'm seeing an unhandled error event (terminating the Node process) with the message/code `ETIMEDOUT` being emitted in a piece of code that communicates with the Twilio API. The code in question is calling `users(id).userChannels.list()` in this case, but I don't think the specific endpoint actually matters. The event is emitted by the `request` module used by `twilio-node`. I'm not entirely sure if this is a bug in twilio-node or rather in request, but it seems to me that maybe an event handling function should be attached to the `http` call in `/lib/base/RequestClient.js`, which could catch sporadic error events, and reject the returned promise in those cases. The request docs only mention handling events in the context of streams, but this issue looks somewhat related. Steps to Reproduce Happens once in a blue moon, so not exactly easily reproducible. Exception/Log [code block] (no mention of twilio-node here, but it is the only dependency in this context that uses request) Technical details: twilio-node version: 3.39.3 node version: v12.13.1
Question Setting ef_search to different values does not affect number of results retrieved (Django 4.2 + PGVector + HNSW index)
Hi all, I have this simple Django model setup: [code block] I am trying to do a basic vector search (cosine) on the model with a simple question/embedding using the following code: [code block] No matter what I set ef_search to, whether 1, 40, 60, or 100, I always get the same number of results. What is the correct way to set ef_search? Regards, Rob
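One thing worth noting: `hnsw.ef_search` controls the size of the candidate list during the index scan (affecting recall), while the number of rows returned is still capped by the query's LIMIT, so changing it would not be expected to change the result count. It also has to be set in the same session or transaction as the query. A sketch of the SQL statements involved (table name `items` is hypothetical, and the function merely builds strings):

```javascript
// Sketch: hnsw.ef_search must be set via SET LOCAL inside the same
// transaction that runs the query; it bounds the candidate list during
// the HNSW scan, while the row count is still capped by LIMIT.
function efSearchStatements(efSearch, limit) {
  return [
    'BEGIN',
    `SET LOCAL hnsw.ef_search = ${Number(efSearch)}`,
    `SELECT id FROM items ORDER BY embedding <=> $1 LIMIT ${Number(limit)}`,
    'COMMIT',
  ];
}

const stmts = efSearchStatements(40, 10);
```

In Django, the same statements would typically be executed through a cursor inside `transaction.atomic()`.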
Cannot call method 'split' of undefined
I'm a newbie to the `node-http-proxy` module. My aim: I need to use the module to provide multi-SSL for multiple subdomains. For example, if a user calls `process.localhost:1443` then I should route the call to `process.localhost:2443`, and if a user calls `api.localhost:1443` then I should route the call to `api.localhost:3443`. What's happening: I wrote the server.js code below. _If I change the `httpProxy.createServer(options)` line to `httpProxy.createServer({target:'http://process.localhost:2443'})` then it works properly!_ <br> Otherwise, when I try to call `process.localhost:1443` I get the following error:<br> `D:\Work Space\...\http-proxy\node_modules\requires-port\index.js:13` `protocol = protocol.split(':')[0];` `TypeError: Cannot call method 'split' of undefined` `protocol` appears to be `undefined`. [code block] What should I do? <br><br> server.js [code block]
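The `TypeError` above typically means the proxy ended up without a fully-specified target URL (so `requires-port` receives an undefined protocol). A hedged sketch of the host-based routing the question describes, returning a full target URL per subdomain so the protocol is always present:

```javascript
// Sketch: pick a proxy target from the Host header. Hostnames and ports
// mirror the example in the question; the routing table is hypothetical.
const routes = {
  'process.localhost': 'http://process.localhost:2443',
  'api.localhost': 'http://api.localhost:3443',
};

function targetFor(hostHeader) {
  // Strip the port first, e.g. 'api.localhost:1443' -> 'api.localhost'.
  const hostname = String(hostHeader || '').split(':')[0];
  return routes[hostname] || null;
}
```

A request handler would then call something like `proxy.web(req, res, { target: targetFor(req.headers.host) })`, rejecting requests whose host has no route.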
bug? - unexpected data beyond EOF in block
While using `pgvector` on a table with frequent updates / inserts on Postgres 14 on macOS on Intel, I've been encountering this error frequently on `UPDATES`: [code block] Looking through the PostgreSQL mailing list about this error, most posts pertain to linux kernels from the ~2010s, and don't seem applicable. I've run `VACUUM FULL` on the table a few times, as well as completely dumping the table using `pg_dump`, deleting the table and recreating. The table is ~340 GiB and there is also a 13 GiB IVFFlat index referencing one of the `vector(768)` columns. Wondering if there might be a bug in how large vectors are stored. My table, notably, contains columns of types: - `vector(768)` - `vector(768)[]` - `character varying[]` - `character varying` And each row is easily around 2 or 3 MiB.
connect ETIMEDOUT - API's are not working
Describe the bug The attached code works fine in curl and Python, but the same code does not work in Node.js (I tried different versions as well). Below is the error: Uncaught Error: connect ETIMEDOUT 64:ff9b::3498:60fc:443 at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) { errno: -4039, code: 'ETIMEDOUT', syscall: 'connect', address: '64:ff9b::3498:60fc', To Reproduce Run the program in a node shell and you will get the error Code snippets [code block] OS macOS Node version Node v18.14.2 Library version openai@3.0.0 / openai@3.2.1 / openai@3.2.0
JSONSchema does not support recursive refs & zod also fails to convert similar schemas to JSONSchema correctly
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug Copying and pasting example schemas (zod included) from the latest docs results in errors when using the latest openai sdk, with a focus on $ref for the new response_format that supports JSONSchema. zod's example @ https://platform.openai.com/docs/guides/structured-outputs/ui-generation [code block] and any JSONSchema example I've tried (e.g. https://platform.openai.com/docs/guides/structured-outputs/recursive-schemas-are-supported) [code block] Initial debugging suggests that the Zod-to-JSONSchema conversion code looks for `definitions` and `$defs` is not considered; but then when the resulting schema is used, it requires `$defs`, so the generated schema from zod is invalid. For the JSONSchema, it struggles with a lot, but when I get it to pass the simple syntax validation, it falls over on the server and I cannot figure out where the issue lies based on the error. To Reproduce 1. Install the latest "openai": "^4.55.0" 2. Copy any of the zod or JSON schemas that include definitions, $defs, or recursive schemas. 3. Invoke the API via any of the usual ways with the new `response_format` [code block] (it helps to also include `DEBUG=true` when running, though nothing valuable appears to be thrown) Code snippets _No response_ OS macOS Node version bun 1.1.20 Library version 4.55.0
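For reference, a minimal recursive JSON Schema using `$defs`, roughly the shape the structured-outputs docs describe (a UI node whose children are UI nodes). The field names here are illustrative, not copied from the docs:

```javascript
// Sketch of a recursive schema via $defs: the 'ui' definition refers to
// itself through '#/$defs/ui'. This is the shape the conversion code needs
// to preserve; rewriting $defs to definitions breaks the $ref targets.
const uiSchema = {
  type: 'object',
  properties: {
    root: { $ref: '#/$defs/ui' },
  },
  required: ['root'],
  additionalProperties: false,
  $defs: {
    ui: {
      type: 'object',
      properties: {
        tag: { type: 'string' },
        children: { type: 'array', items: { $ref: '#/$defs/ui' } },
      },
      required: ['tag', 'children'],
      additionalProperties: false,
    },
  },
};
```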
Nodejs v0.10.2 "Proxying http from https using two certificates"
Hey guys, I've been trying to make "Proxying http from https using two certificates" work with the latest version of Node.js currently available (v0.10.2), but I've not been able to do it. However, I had tried it before with version 0.6.12 of Node.js and it worked perfectly. I've seen that the problem is due to this: --- tls.js:1028 throw new Error('Missing PFX or certificate + private key.'); ^ Error: Missing PFX or certificate + private key. at Server (tls.js:1028:11) at new Server (https.js:35:14) at Object.exports.createServer (https.js:54:10) at Object.exports.createServer (/home/afuentes/node_modules/http-proxy/lib/node-http-proxy.js:178:13) at Object.<anonymous> (/home/afuentes/nodejs_test/node4_https_2cert.js:53:11) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Function.Module.runMain (module.js:497:10) --- The reason for this is that the following line was added to that file (tls.js), compared with the version that worked: --- if (!self.pfx && (!self.cert || !self.key)) { throw new Error('Missing PFX or certificate + private key.'); } --- Therefore, it requires either a ".pfx" certificate or a cert + key pair. I have tried to solve it by creating one, but it does not work yet. Could anyone help me? Has anyone tried the same? Thanks in advance.
Error in basic configuration
I'm getting the following error with a very simple app. I'm running a normal web server on port 8000. app.js [code block] console [code block]
SELECT query not using (HNSW) index
Given a table of about 1 million rows with columns `id` of type integer and `embedding` of type vector(2000), I ran the following query in the pgAdmin query tool: [code block] Whatever SELECT query I ran afterwards, I'm not seeing the index being used when prefixing the query with `EXPLAIN ANALYZE`. After seeing @ankane's comments "[ordering by an expression [...] won't use the index](https://github.com/pgvector/pgvector/issues/216#issuecomment-1668244772)" and "Postgres only supports ASC order index scans on operators", I tried a simple `SELECT id, embedding <=> $1 FROM table` but it's still not using the index. Many blog articles, such as this GCP one and this other one, recommend using `EXPLAIN` to check whether the index is used. Thus my questions: - How can I check that the index was actually built? With `EXPLAIN` I can see that even the simplest query does not use the index, so I'm wondering whether the index was actually created (despite the `CREATE INDEX` having "returned successfully in 1 hr 6 min.") - How do I use the HNSW index with a `SELECT` statement? - Assuming an index has been created, what happens if I rerun a `CREATE INDEX` statement? If I create an HNSW index, I assume it overwrites the existing index (does it?), and does it overwrite if I create an `IVFFLAT` index instead? Related issues: #174, #216
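Two checks that usually clarify situations like this, expressed as plain SQL strings (the table name `items` is a stand-in): the index only accelerates nearest-neighbor *ordering*, so a query needs `ORDER BY column <=> constant` plus a `LIMIT` to be index-eligible; computing the distance in the select list alone never uses it.

```javascript
// 1) Confirm the index exists and inspect its definition.
const checkIndexSql =
  "SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'items'";

// 2) An index-eligible query: ORDER BY the distance operator against a
//    constant, with a LIMIT. A bare distance in the SELECT list (without
//    this ORDER BY) gives the planner nothing to use the index for.
const indexableSql =
  'SELECT id FROM items ORDER BY embedding <=> $1 LIMIT 10';
const explainSql = 'EXPLAIN ANALYZE ' + indexableSql;
```

If `checkIndexSql` returns a row, the index was built; if `explainSql` still shows a sequential scan for this query shape, planner settings (or index build parameters) are the next thing to inspect.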
files.retrieveContent only returns strings, and not bytes/binary data
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug See this issue for the python library. I don't see an equivalent method to with_raw_response in the node sdk. To Reproduce [code block] This saves a corrupt file, though the file can be downloaded from the playground without a problem. Code snippets _No response_ OS windows 11 Node version Node.js v20.9.0 Library version "openai": "^4.16.1"
404 when making createCompletion
Describe the bug A few days ago, createCompletion requests started to return 404 [code block] To Reproduce 1. Run createCompletion Code snippets [code block] OS macOS Node version Node v14.17.4 Library version openai v3.1.0
"Connection is closed" during quit() when commandQueue has entries
Problem `await ioredis.quit()` throws an `Error` when there are still items on the `commandQueue`. [code block] Version ioredis 5.8.0 Context We have code for a health check that connects and immediately disconnects using `await ioredis.quit()`. The issue occurs often since https://github.com/redis/ioredis/pull/2011. I assume the reason for this is that we now have entries on the `commandQueue` immediately. Workaround The issue does not occur anymore when I set `disableClientInfo` to `true`. Reason Usually the `commandQueue` is empty and the connection terminates successfully. However, when the error occurs, we have the following entries on the `commandQueue`: [code block] The `closeHandler` flushes the queue with an error (https://github.com/redis/ioredis/blob/8dad79f9d05c8891d0c70336f484b065b9865ae2/lib/redis/event_handler.ts#L227). Thus, `quit` can only ever complete successfully if the queue is empty.
`pip install anthropic` currently fails on Python 3.13
One of the `anthropic` dependencies is not yet available for Python 3.13 - `tokenizers` needs `pyo3-ffi` and that's not on Python 3.13 yet: - https://github.com/PyO3/pyo3/issues/4554 This means nothing that runs Python 3.13 can `pip install anthropic` right now. Is the `tokenizers` dependency really necessary? I think it's only there for a VERY old count tokens feature which isn't actually that useful because it still uses the Claude 2 tokenizer: https://github.com/anthropics/anthropic-sdk-python/blob/cd80d46f7a223a5493565d155da31b898a4c6ee5/src/anthropic/_client.py#L270-L286
o3-mini in Assistants API unsupported parameter
Confirm this is a Node library issue and not an underlying OpenAI API issue - [x] This is an issue with the Node library Describe the bug With o3-mini in the Assistants API I get 400 Unsupported parameter: 'temperature' is not supported with this model. There's no way to avoid this error, as I think the openai node npm package is sending the param. To Reproduce Just use the o3-mini model and the Assistants API. Code snippets [code block] OS macOS Node version 22.3.0 Library version 4.83.0
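Until the SDK or API handles this, a client-side guard can simply omit `temperature` for reasoning models. The model-name check below is a heuristic of my own, not an official list, and `buildRunParams` is a hypothetical helper:

```javascript
// Sketch: drop `temperature` from the request params for models that
// reject it (o1, o3-mini, ...). The regex is a heuristic, not an API rule.
function buildRunParams(model, params = {}) {
  const supportsTemperature = !/^o\d/.test(model);
  if (!supportsTemperature) {
    const { temperature, ...rest } = params; // strip the offending param
    return { model, ...rest };
  }
  return { model, ...params };
}
```

The resulting object would then be passed to the Assistants run-creation call in place of hand-built params.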
TypeError: Cannot read properties of undefined (reading 'prototype') when using Next.js 16 + Turbopack
Description I am encountering a `TypeError` immediately upon starting my Next.js application when `twilio` is imported. This issue appears to be related to how the library is bundled by Turbopack (Next.js bundler) in a dev environment. The error disappears if I add `twilio` to `serverExternalPackages` in `next.config.mjs`, forcing it to be excluded from the bundle. Steps to Reproduce 1. Initialize a Next.js 16 project. 2. Install `twilio`. 3. Import `twilio` in a server-side file (e.g., a service or API route). 4. Run the dev server with Turbopack: `next dev --turbo`. Code Snippet [code block] Exception/Log [code block] OS: macOS Darwin 25.1.0 (arm64) Bundler: Turbopack (`--turbo`) Workaround Adding twilio to serverExternalPackages in next.config.mjs solves the issue: [code block] Environment Node.js: v20.19.5 Next.js: v16.0.1 Twilio: v5.10.6 OS: macOS Darwin 25.1.0 (arm64) Bundler: Turbopack (`--turbo`)
pg-vector not using indexes
I have these tables: [code block] [code block] `company_fact_table` has around 16 M rows. `nlp_vectors.chat_gpt_company_embeddings` has about 1.3 M rows. I am using an IVFFLAT index of class `vector_cosine_ops` on the text_vector column with `lists = 1024`. When I run this query: [code block] I get this query plan: [code block] which doesn't use the IVFFLAT index. It starts to use the index when the limit is greater than 600. Because I want to return all the data of the company (which is contained in company_fact_table), I am joining with that table like this: [code block] And it's giving this query plan: [code block] which uses the index, but I get a very slow response time (I need to get results in under 5 seconds). An approach I thought about was to change the order in which I do the sorting and the join. In the original query, I join the two tables and then perform the sorting. In my suggested approach, I first sort in a subquery and then join the two tables. The query would look like this: [code block] which gives me this query plan: [code block] It gives me a much better execution time, and the accuracy of the results is between 99% and 100% compared to the results of the original query. Changing the LIMIT of the query above to 600 (when it starts using the index), I get this query plan: [code block] My questions are: 1. Why does it only use the index when the limit is greater than 600, given that it seems to compute all the cosine distances? 2. Why does the JOIN slow the query down so much?
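For reference, the subquery-first rewrite described above can be sketched as follows. The join column (`company_id`) and the LIMIT are guesses, since the actual schema is only in the omitted code blocks; the point is the shape: run the ANN search in a subquery with its own `ORDER BY ... LIMIT` (so the IVFFlat index applies), then join the small result set to the wide fact table.

```javascript
// Sketch of the subquery-first pattern: ANN search inside, join outside.
// Column and table names are simplified stand-ins for the ones in the
// question.
const subqueryFirst = `
  SELECT f.*
  FROM (
    SELECT company_id, text_vector <=> $1 AS distance
    FROM nlp_vectors.chat_gpt_company_embeddings
    ORDER BY text_vector <=> $1
    LIMIT 100
  ) AS nearest
  JOIN company_fact_table f USING (company_id)
  ORDER BY nearest.distance`;
```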
Wrong return type for openai.audio.speech.create API
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug I'm following the tutorial for TTS on the OpenAI website: https://platform.openai.com/docs/guides/text-to-speech?lang=node When I try to use that code in a typescript file, typescript always infers a return value of "never". Ignoring TS for that line makes it work. <img width="852" alt="Screenshot 2023-11-10 at 4 19 18 PM" src="https://github.com/openai/openai-node/assets/35743865/32daac7c-da62-4826-b723-033c227a045a"> To Reproduce Save the code from https://platform.openai.com/docs/guides/text-to-speech?lang=node in a .ts file. Code snippets _No response_ OS macOS Node version Node v19 Library version v4.17.17