All Issues
24,993 verified issues
Realtime Google Speech Transcription
I tried Twilio Speech Recognition and was not happy with the accuracy of the speech-to-text conversion. I wanted to use the Google Speech API for transcription instead, and I was following this article: https://medium.com/@mheavers/better-twilio-transcriptions-with-the-google-web-speech-api-eb24274c5e3 There, they record the speech and then send the recording to the Google Speech API. Is there any way to do this in real time, without hanging up the call? Something like a drop-in replacement for Twilio Speech Recognition. Thanks in advance.
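One common approach (not from the article above, so treat this as a hedged sketch) is Twilio Media Streams: the `<Stream>` TwiML verb forks the live call audio to a WebSocket server, which can pipe it into Google's streaming recognizer without ending the call. The server below assumes the `ws` and `@google-cloud/speech` packages; the port and function names are illustrative.

```javascript
// Sketch: bridge a Twilio Media Stream into Google Cloud Speech streaming
// recognition for live transcription. Assumes `ws` and `@google-cloud/speech`
// are installed and Google credentials are configured.

// Twilio Media Streams deliver 8 kHz mulaw audio, which maps onto this
// Google streaming-recognition config:
function buildStreamingConfig() {
  return {
    config: {
      encoding: 'MULAW',
      sampleRateHertz: 8000,
      languageCode: 'en-US',
    },
    interimResults: true, // emit partial transcripts while the caller speaks
  };
}

// Decode one Twilio WebSocket frame; only 'media' events carry base64 audio.
function extractAudioChunk(rawFrame) {
  const msg = JSON.parse(rawFrame);
  if (msg.event !== 'media') return null; // 'start'/'stop'/'mark' frames
  return Buffer.from(msg.media.payload, 'base64');
}

// Wiring (not executed here): a WebSocket server receives frames from the
// <Stream> verb and writes the raw audio into streamingRecognize().
function startBridge() {
  const WebSocket = require('ws');
  const speech = require('@google-cloud/speech');
  const client = new speech.SpeechClient();

  const wss = new WebSocket.Server({ port: 8080 });
  wss.on('connection', (socket) => {
    const recognizeStream = client
      .streamingRecognize(buildStreamingConfig())
      .on('data', (data) => {
        const result = data.results && data.results[0];
        if (result && result.alternatives[0]) {
          console.log(result.alternatives[0].transcript);
        }
      });
    socket.on('message', (raw) => {
      const chunk = extractAudioChunk(raw);
      if (chunk) recognizeStream.write(chunk);
    });
    socket.on('close', () => recognizeStream.end());
  });
}

module.exports = { buildStreamingConfig, extractAudioChunk, startBridge };
```

On the Twilio side you would point `<Stream url="wss://your-host/..."/>` at this server in your TwiML, so the call keeps running while transcripts arrive.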
pgvector vs FAISS
Update: upgrading to v0.1.1 and building with `PG_CFLAGS=-ffast-math make` reduced the query time to 2.2s! A big speed jump, but still 1.7x slower than the FAISS / Python service. ----- I imported 792,010 rows of 512-dimensional image vectors (~5GB, i.e. not random data) and ran a test [1] to find the 4 closest vectors to an exact vector from the dataset. Search times: - 1.279357709s - FAISS Python web service (using JSON and IndexFlatL2, with 791,963 vectors [2]). - 11.381s - pgvector extension (l2_distance, with 792,010 rows). Hardware: [code block] Importing took 11.381 seconds with the `COPY` command from a CSV file with one vector per row. Any ideas why pgvector would be so much slower? The testing environments differed significantly between the tools, to FAISS's disadvantage, but FAISS was still much quicker. [1] Not a "scientific" test: other programs were running on the machine during the test. Mileage may vary. [2] The slight difference is because the FAISS import filters out duplicate vectors.
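For anyone reproducing this, the exact-search query being compared is a full sequential scan ordered by L2 distance. A hedged sketch of how that query looks from node-postgres (table and column names here are hypothetical, not from the post):

```javascript
// Sketch of the pgvector exact (non-indexed) kNN search being benchmarked.
// `<->` is pgvector's L2 distance operator; LIMIT 4 matches the test above.
const KNN_QUERY = `
  SELECT id, embedding <-> $1 AS distance
  FROM images
  ORDER BY embedding <-> $1
  LIMIT 4`;

// pgvector accepts '[1,2,3]'-style literals for vector parameters.
function formatVector(vec) {
  return '[' + vec.join(',') + ']';
}

// Usage (not executed here), assuming a connected `pg` client:
async function nearest(client, vec) {
  const res = await client.query(KNN_QUERY, [formatVector(vec)]);
  return res.rows; // 4 closest vectors with their L2 distances
}
```

Without an ivfflat/hnsw index this is O(n) per query, which is the apples-to-apples counterpart of FAISS's IndexFlatL2.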
Empty args/inputs when turning streaming on and setting tool choice to any
Description When calling the Anthropic client with `streaming=True` and `tool_choice={"type": "any"/"tool" }`, the output returns a tool call but with empty args. This is problematic beyond the missing args themselves: quite a few packages rely on the `anthropic-sdk`, one of which is `langchain-anthropic` (ref). Expected response I would expect the output to include the inputs/args required for the tool call when streaming. Reproduction steps I've added a notebook highlighting the behavior: https://gist.github.com/kwnath/f42737c023767d5effdcca20cb5bd0a6
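One thing worth ruling out when debugging this: in Anthropic's documented streaming format, a `tool_use` content block *starts* with `input: {}` and the arguments arrive incrementally as `input_json_delta` events, so a consumer that only reads the opening block will always see empty args. A minimal accumulator sketch (the event shapes follow the documented stream format; the helper itself is illustrative):

```javascript
// Reassemble a streamed tool call's input: concatenate the partial_json
// fragments from input_json_delta events, then parse the complete JSON.
function accumulateToolInput(events) {
  let json = '';
  for (const ev of events) {
    if (ev.type === 'content_block_delta' && ev.delta.type === 'input_json_delta') {
      json += ev.delta.partial_json;
    }
  }
  return json ? JSON.parse(json) : {};
}
```

If the deltas themselves are empty on the wire (as the notebook suggests), that points at the SDK/API rather than client-side accumulation.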
TypeError: getDefaultAgent is not a function
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug When using openai-node together with LangChain and Datadog under ESM, I get the following error: [code block] To Reproduce [code block] Code snippets _No response_ OS macOS Node version node 20, node 22 Library version 4.52.0
Manhattan distance
I was wondering if the Manhattan distance could be added to this extension? I'm asking because if any maintainer thinks it's easy to add and is willing to do it, then I won't bother looking into how to add it myself (since the Euclidean distance is already implemented, I'm guessing it might be almost trivial to add). Also, FYI, with 10 or more dimensions L1 is better suited than L2: the more dimensions you add, the smaller the relative differences in distances under L2 become. Beyond some radius, points tend toward similar distances as dimensionality increases.
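For reference, the distance being requested just sums absolute coordinate differences instead of squaring them, which is why it degrades more gracefully in high dimensions:

```javascript
// Manhattan (L1) distance: sum of absolute per-coordinate differences.
function l1Distance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
  return sum;
}

// Euclidean (L2) distance, for comparison with what pgvector already has.
function l2Distance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}
```

(pgvector has since grown an L1 operator, `<+>`, in newer releases, but check the version you're running.)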
High LWLock Contention During Concurrent HNSW Index Scans
I've been running into LWLock contention issues with HNSW indexes during concurrent workloads, and I wanted to see if anyone has insights or suggestions for improving this. The problem becomes noticeable once we have 32+ DB connections: at that level, the database load is dominated by LWLock:LockManager wait events. At lower concurrency levels QPS scales pretty well and there's minimal lock contention, but as we increase the number of connections, contention spikes and throughput saturates. Observations Lock Behavior - Based on the code in hnswscan.c, it looks like the search process uses `LockPage(..., HNSW_SCAN_LOCK, ShareLock)` to protect access to the adjacency graph during traversal. These locks are brief but can pile up when multiple queries hit the same graph structures. Scaling Issue - With ~32 workers (1 connection per worker), QPS is great and lock contention is low. When we push from 32 to 100 or more workers, contention grows exponentially, leading to LWLock:LockManager dominating database load. What I've Tried: Concurrency Tuning: Sticking to ~32 workers seems to work best, but we'd like to scale further if possible. Instance Scaling: Larger instances don't seem to help much because the bottleneck is lock contention, not compute or I/O. Yet to try: prepared statements. Ideas: 1. Finer-grained Locking: Is there a way to reduce the granularity of the HNSW_SCAN_LOCK to avoid so much contention when multiple queries traverse the graph? 2. Asynchrono
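The "yet to try" item, prepared statements, can be sketched with node-postgres: passing a `name` makes the server parse and plan the statement once per connection and reuse it, which can trim per-query lock traffic. The table/column names below are hypothetical, not from the post.

```javascript
// Hedged sketch of the prepared-statement idea using node-postgres (`pg`):
// a named query becomes a server-side prepared statement, reused per
// connection instead of being re-planned on every execution.
function buildKnnQuery(vec, k) {
  return {
    name: 'hnsw-knn', // presence of `name` triggers PREPARE/EXECUTE semantics
    text: 'SELECT id FROM items ORDER BY embedding <-> $1 LIMIT $2',
    values: ['[' + vec.join(',') + ']', k], // pgvector vector literal
  };
}

// Usage (not executed here), assuming a connected `pg` client:
//   const { rows } = await client.query(buildKnnQuery(queryVector, 10));
```

Whether this moves the needle on LockManager waits is workload-dependent, but it is cheap to test.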
routing broken with express server
I have the following setup $ node -v v0.6.11 $ npm list blah/blah/blee/nodex ├─┬ express@2.5.8 │ ├─┬ connect@1.8.5 │ │ └── formidable@1.0.9 │ ├── mime@1.2.4 │ ├── mkdirp@0.3.0 │ └── qs@0.4.2 ├─┬ http-proxy@0.8.0 │ ├── colors@0.6.0-1 │ ├─┬ optimist@0.2.8 │ │ └── wordwrap@0.0.2 │ └── pkginfo@0.2.3 └─┬ socket.io@0.9.1-1 ├── policyfile@0.0.4 ├── redis@0.6.7 └─┬ socket.io-client@0.9.1-1 ├─┬ active-x-obfuscator@0.0.1 │ └── zeparser@0.0.5 ├── uglify-js@1.2.5 ├─┬ ws@0.4.8 │ ├── commander@0.5.0 │ └── options@0.0.2 └── xmlhttprequest@1.2.2 Issue: I have this simple express server [code block] And the following simple proxy.js [code block] It just doesn't work! $ curl -XGET http://localhost:8080/ -v ... ok! ... $ curl -XGET http://localhost:8081/ -v blah blah... NOT FOUND ..blah bleh ..
Cluster UnhandledPromiseRejectionWarning
I'm intentionally causing a connection error (by omitting a NAT mapping for one cluster node) to verify my application's error handling, but I'm seeing an UnhandledPromiseRejectionWarning: [code block] My code is executing a multi command like this: [code block] I believe this is the relevant debug output preceding the throw: [code block] The promise for `exec` does reject as expected (json taken from my log): [code block] Is there something I can do to catch the rejection? Possibly related, the `'error'` event listener I have attached to the cluster does not trigger when this happens, which also seems unexpected. I was using `4.9.0` and tried updating to `4.9.5`, and have the same experience with both versions. TIA for any guidance!
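One general pattern worth checking while waiting for an answer: a cluster-level `'error'` listener and a promise `.catch()` guard different failure paths, so both are needed. The sketch below replaces the ioredis `multi().exec()` call with an injectable stub so the shape of the handling is visible without a live cluster; `execWithHandling` is an illustrative name, not an ioredis API.

```javascript
// Wrap any promise-returning command so a rejection is always consumed,
// even if an internal code path also rejects somewhere we can't reach.
async function execWithHandling(execFn) {
  try {
    return { ok: true, result: await execFn() };
  } catch (err) {
    // Centralized handling: log, emit a metric, fall back, etc.
    return { ok: false, error: err.message };
  }
}

// Usage with ioredis (not executed here):
//   const cluster = new Redis.Cluster([{ host: '127.0.0.1', port: 7000 }]);
//   cluster.on('error', (err) => console.error('cluster error:', err.message));
//   const outcome = await execWithHandling(() => cluster.multi().set('k', 'v').exec());
```

If the warning still fires with every application-visible promise caught, the rejection is happening on an internal promise inside the library, which would make this an ioredis bug rather than an application-side miss.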
Issue with ReactNative iOS
Describe the bug Using v4 on React Native iOS, I am getting a 404 from Cloudflare. The same code works in standard Node. I haven't checked with v3 yet. Other external requests work fine. Same code in both environments, copied from the sample. Log outputs compared: [code block] To Reproduce [code block] Code snippets _No response_ OS macOS Node version Node v16.x Library version v4b4
my HTTPS Proxy Server doesn't work!
Hi, I have a simple problem with an HTTPS proxy server! I made a simple HTTPS proxy server (from the example files) and set my ip/port as the proxy server in my browsers (HTTPS proxy server in Safari & SSL proxy in Firefox), but I don't receive any request when I try to open a secure website. I don't have a problem with HTTP servers - I can handle incoming requests - but for HTTPS websites it doesn't work for me! The simple flow is: 1) run the https server on localhost:9000 2) set the http & https proxy in the browser 3) no log in the https server when I open a secure website in the browser (like https://google.com) thanks
error handling
stripe__WEBPACK_IMPORTED_MODULE_0__.default is not a constructor
Describe the bug I receive an error when importing stripe. I'm using Next 13 with the app dir and the latest version of stripe. To Reproduce 1. Create a Next 13 app. 2. Create a `stripe.ts` file to initialize the library. 3. Try to use it in a server route. Expected behavior The library should return the expected Stripe object, since nothing flags any apparent error. Code snippets _No response_ OS Windows 11 Node version Node v18.16.0 Library version "stripe": "12.14.0" API version 2022-11-15 Additional context _No response_
How to achieve an effect similar to uploading file attachments on the web page in the API?
I have some plain-text files to process, but I found that if I upload them as attachments on the web page, the response I get is significantly better than the results obtained through the API. I would like to know if there is any way to make the API results look like the web-page results. The format of the prompt I send via the API is roughly this: [code block] What I send on the web page is this: "{Introduction}", and then I upload the file as an attachment. My understanding is that the web page must use a fixed format to join the file content with my prompt to get better results. Can I get this joining format? If you need the text of the {introduction} part or the specific text files involved, please contact me, thank you very much! This problem has bothered me for several days; I am looking forward to your reply :)
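One convention that Anthropic's prompting guides recommend (and which may approximate what the web UI does - this is an assumption, not the UI's actual internal template) is wrapping each file in document tags before the instructions. A sketch of such a builder, with hypothetical field names:

```javascript
// Build a prompt that presents file contents in XML-style document tags,
// followed by the user's instructions - documents first, question last.
function buildPrompt(intro, files) {
  const docs = files
    .map(
      (f, i) =>
        `<document index="${i + 1}">\n` +
        `<source>${f.name}</source>\n` +
        `<document_contents>\n${f.text}\n</document_contents>\n` +
        `</document>`
    )
    .join('\n');
  return `<documents>\n${docs}\n</documents>\n\n${intro}`;
}
```

Putting the documents before the instructions tends to work better than interleaving them, which may account for some of the quality gap you're seeing.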
maxNetworkRetries setting does not work
environment: node v10.x on AWS Lambda package version `"stripe": "^8.84.0"` We have the following code in our checkout Lambda function, as suggested in the official docs https://www.npmjs.com/package/stripe#network-retries [code block] In one of the executions it threw an exception [code block] We expected the request to be retried at least 3 times, but it was not - the Lambda function's duration was only ~5000ms <img width="969" alt="_scr_ 2020-10-28 at 14 34 44" src="https://user-images.githubusercontent.com/155563/97489568-15d2d880-1936-11eb-95f3-344a0db83723.png">
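While waiting on an SDK answer, an application-level retry wrapper is an easy way to confirm whether retries change the outcome, independently of `maxNetworkRetries`. A generic sketch (not stripe-node code; the helper name is illustrative):

```javascript
// Retry a promise-returning function up to `attempts` times, rethrowing the
// last error if all attempts fail. No backoff, to keep the sketch minimal.
async function withRetries(fn, attempts) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err; // remember and retry
    }
  }
  throw lastErr;
}

// Usage (not executed here):
//   const session = await withRetries(() => stripe.checkout.sessions.create(params), 3);
```

Note that in a Lambda the function timeout caps total retry time, so the configured SDK timeout times the retry count must fit inside the Lambda timeout for retries to be observable at all.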
How to handle errors when the redis server cannot be reached
If the Redis server cannot be connected to, how do I catch this kind of error in ioredis?
Cluster: problem when using Slaves for READ operations
I'm using Redis in Cluster mode with ioredis. I discovered the `{ readOnly: true }` option, which looks amazing. However, I've been getting some new MOVED errors since using this option. It only happens on write operations (`del`, `hincrby`, etc.). I looked at the code, and this piece of code (in cluster.js) caught my attention: [code block] My guess: when the readOnly option is activated, it randomly selects one of the nodes responsible for the targetSlot. In a setup with a Slave for each Master, the selection is made between two nodes (one Master and its Slave). This means read operations are randomly distributed between these two nodes, which is the objective (even though I originally thought reads would only go to Slave nodes). However, it seems the same process is applied to write operations. That creates an issue, because if the Slave node is selected for the operation, it returns a MOVED error. Eventually, with a few retries, there is a good chance the Master node will be picked by the random selector, so the operation has a good chance of succeeding. However, it looks very inefficient, and with a lot of operations you'd get some operations actually failing. Please let me know if this is a bug or if I'm not using the library properly
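The routing rule you'd expect is: only commands known to be reads are eligible for replicas; anything else must go to the slot's master. A toy classifier sketch of that rule (not ioredis internals - the library derives read/write flags from command metadata rather than a hand-written list like this):

```javascript
// Illustrative read-scaling router: writes always target the master,
// reads may be spread across master + replicas for the slot.
const READ_COMMANDS = new Set(['get', 'hget', 'mget', 'smembers', 'zrange', 'exists']);

function pickNode(command, master, slaves) {
  if (!READ_COMMANDS.has(command.toLowerCase())) {
    return master; // del, hincrby, set, ... must never hit a replica
  }
  const pool = [master, ...slaves];
  return pool[Math.floor(Math.random() * pool.length)];
}
```

If `del`/`hincrby` are being routed to slaves, that check is being skipped somewhere, which would match the MOVED errors you're seeing and would indeed be a bug.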
Better Access to Response Headers
On a successful API request the Stripe Node SDK doesn't provide a natural way of accessing response headers, which can be useful when you're interested in response metadata (such as `Idempotency-Key` and `Request-ID`). For example, today it is possible to do this, but it is a little awkward since response metadata is exposed through an event-emitter interface while the response body is exposed through a callback interface: [code block] It would be a much better experience if this data was passed in the callback like so: [code block] My idea is that we add headers, statusCode and known Stripe headers such as `idempotencyKey` to go along with `requestId` in StripeResource here. Thoughts?
Unhandled Rejection in Twilio SDK When Catching Exceptions in Application
Issue Summary The Twilio SDK has an unhandled promise rejection internally, and it can't be caught from the application. This issue causes application termination. Steps to Reproduce 1. See the code snippet Code Snippet [code block] Exception/Log [code block] Technical details: twilio-node version: 4.14.0 node version: 18.14.1
res.on('end', callback) not working
My code: [code block]
ssubscribe within a cluster is not working properly
Hi! It seems there is an issue with the spublish()/ssubscribe() methods added in https://github.com/luin/ioredis/commit/6285e80ffb47564dc01d8e9940ff9a103bf70e2d: [code block] The `smessage` event is not received. It works with classic publish()/subscribe() though: [code block] My `docker-compose.yml`, for reproducibility: [code block] The test case here looks a bit weird, shouldn't it be something like: [code block] Thanks in advance!