All Issues
24,993 verified issues
How to avoid CLUSTERDOWN and UNBLOCKED?
Revisiting this problem, I asked on the redis google group about it, and here is what @antirez said: [code block] Configuration-wise, I've set my cluster with "cluster-require-full-coverage" set to "no", so I just need to know how to cover this on the application side.
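On the application side, one common approach (a sketch only, not an official Redis pattern; the function name and backoff values are illustrative) is to retry commands that fail with a transient cluster error such as CLUSTERDOWN or UNBLOCKED:

```typescript
// Hypothetical application-side wrapper: retries an operation when
// Redis reports a transient cluster error (CLUSTERDOWN / UNBLOCKED).
async function withClusterRetry<T>(
  op: () => Promise<T>,
  retries = 5,
  delayMs = 100,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      const msg = err instanceof Error ? err.message : String(err);
      const transient =
        msg.startsWith("CLUSTERDOWN") || msg.startsWith("UNBLOCKED");
      if (!transient || attempt >= retries) throw err;
      // simple linear backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Non-transient errors and exhausted retries are re-thrown so real failures still surface to the caller.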
AWS Provider Coverage
View this spreadsheet for a near-real-time summary of AWS resource coverage. If there's a resource you would like to see coverage for, just add your GitHub username next to the resource. We will use the number of community upvotes in the spreadsheet to help prioritize our efforts. https://docs.google.com/spreadsheets/d/1yJKjLaTmkWcUS3T8TLwvXC6EBwNSpuQbIq0Y7OnMXhw/edit?usp=sharing
Downloading runner update fails: "An error occurred: Access to the path is denied"
Our self-hosted runner, which runs in OpenShift, is stuck in an update failure loop: [code block] ... which repeats indefinitely. As far as we can tell, the permissions are set up correctly. The current user 'runner' is part of the root group and is able to write to `_diag`, `_work`, etc. [code block] You can see the updater was able to download and extract the 2.277.1 directories but seems to have failed after that step. Logs: [code block] Any idea what path may be missing permissions? Or, could the path that is getting the "permission denied" error be added to the error message? Thank you
Understanding HNSW + filtering
Hi, I would like to understand how the current implementation handles HNSW + filtering. Imagine you have a table: [code block] And that you even have an index on `category`: [code block] And then you want to run a query like: [code block] Doing this efficiently is not straightforward -- ideally we want to run the expensive HNSW ANN search on the already pruned subset (https://qdrant.tech/articles/filtrable-hnsw/). Can pgvector do this, or is there a plan to enable such an optimization in the future? (In this case the condition is simple enough that you might be able to use table partitioning, but that's not always the case.)
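For context, the usual application-side workaround today is post-filtering: over-fetch k' > k nearest neighbours from the ANN index (e.g. via `ORDER BY embedding <-> $1 LIMIT k'`), then drop rows that fail the predicate. A minimal sketch of that idea in plain TypeScript (type and function names are illustrative, not pgvector API):

```typescript
interface Candidate {
  id: number;
  category: string;
  distance: number; // ANN distance, assumed already sorted ascending
}

// Post-filter an over-fetched candidate list down to the top-k rows
// that satisfy the predicate. If too few survive, a real implementation
// would re-query with a larger k'.
function postFilter(
  candidates: Candidate[],
  wanted: string,
  k: number,
): Candidate[] {
  return candidates.filter((c) => c.category === wanted).slice(0, k);
}
```

The obvious downside, as the issue notes, is that for selective filters you may need a very large over-fetch factor to get k survivors.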
Open Connections increasing
I have two connections from Node.js to Redis: one to set a list variable, and one to subscribe to a channel. The longer the socket server runs, the more connections I get. I have tried setting timeout values in Redis and tried closing the Redis connections when the socket.io socket closes. [code block] The connection in [code block] is always active with no problem whatsoever. But as time goes by, the subscriber connections in [code block] increase, even though I unsubscribe on disconnect and on error. Any ideas? I have asked almost the same question on Stack Overflow but have had no answer as of yet. P.S.: sorry for my bad code and knowledge, this is my first Node.js project ever.
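One pattern that avoids this (a sketch of the bookkeeping only, not tied to any specific Redis client) is to keep a single long-lived subscriber connection per process and multiplex per-socket handlers over it, rather than opening a new subscriber connection for every socket.io client:

```typescript
type Handler = (message: string) => void;

// One shared subscriber; per-socket handlers are added and removed here
// instead of opening/closing a Redis connection per socket.
class SharedSubscriber {
  private handlers = new Map<string, Set<Handler>>();

  on(channel: string, h: Handler): void {
    if (!this.handlers.has(channel)) {
      this.handlers.set(channel, new Set());
      // here you would call redisClient.subscribe(channel) exactly once
    }
    this.handlers.get(channel)!.add(h);
  }

  off(channel: string, h: Handler): void {
    const set = this.handlers.get(channel);
    if (!set) return;
    set.delete(h);
    if (set.size === 0) {
      this.handlers.delete(channel);
      // here you would call redisClient.unsubscribe(channel)
    }
  }

  // called from the single Redis client's "message" event
  dispatch(channel: string, message: string): void {
    this.handlers.get(channel)?.forEach((h) => h(message));
  }

  channelCount(): number {
    return this.handlers.size;
  }
}
```

With this shape, socket.io disconnects only remove a handler from the map; the Redis connection count stays constant no matter how many clients come and go.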
Replace node-fetch with undici
Confirm this is a feature request for the Node library and not the underlying OpenAI API. - [X] This is a feature request for the Node library Describe the feature or improvement you're requesting I noticed this library is still using node-fetch because Node's native fetch is considered _experimental_. I think it'd be in the library's best interest to switch to undici instead. Undici is the fetch implementation in Node.js. For all intents and purposes it is _stable_ (https://github.com/nodejs/undici/issues/1737). We (the maintainers of undici) have some concerns about marking it as such in Node.js just yet because of the nature of the Fetch API spec (it itself adds breaking changes occasionally, which doesn't fit well with Node.js's versioning strategy; it's complicated. Read the issue I linked above for more details). Swapping in undici for the shim will enable a significantly easier upgrade path in the future, whenever we figure out how to mark it as properly _stable_ in Node.js. Happy to help swap this out too if the maintainers approve. Additional context _No response_
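For illustration, a runtime-selection sketch (not the library's actual code; the function name is made up): prefer the built-in fetch, which is backed by undici on Node >= 18, and only fall back to a polyfill on older runtimes.

```typescript
// Sketch only: return the global fetch when the runtime provides one.
// On Node >= 18 this is undici's fetch; older runtimes would need to
// load a shim (node-fetch or the undici package) at this point instead.
function pickFetch(): typeof globalThis.fetch {
  if (typeof globalThis.fetch === "function") {
    return globalThis.fetch;
  }
  // Hypothetical fallback branch; a real library would import its shim here.
  throw new Error("no fetch implementation available on this runtime");
}
```

Keeping the selection behind one function like this is what makes the eventual "drop the shim entirely" change a one-line upgrade.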
IVFFLAT QPS too low
I am using IVFFLAT for 1200-dimensional embeddings of vector type. I have 20 million rows. The query that is slow takes a user-provided vector and finds the top 100 matching vectors. There are 4200 lists and 10 probes. The query takes 30 seconds when it's cold and 100ms if it's a repeat query. I confirmed with EXPLAIN that the index is being used. The issue is very slow IO. All of the query time is spent reading in the blocks that are buffer misses during the `Index Scan` operation. The throughput is about 6 MB/s. The hardware configuration is r5.2xlarge from AWS RDS. Here is an example `EXPLAIN (ANALYZE, BUFFERS)` result: [code block] Given that the IO should be on SSD, I am puzzled at the extremely low throughput. What am I doing wrong? Should I be using HNSW? Is it a huge performance issue that most of my rows are stored as TOAST?
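As a sanity check on those numbers (my own back-of-the-envelope arithmetic, using the figures from the issue plus an assumed 4 bytes per float dimension and ignoring page/TOAST overhead): each probe touches roughly rows/lists vectors, so a cold query reads on the order of 200+ MB, which at ~6 MB/s is about 38 seconds, consistent with the 30 s observed.

```typescript
// Back-of-the-envelope IO estimate for a cold IVFFlat query.
// Figures from the issue; bytesPerDim = 4 is an assumption (float4).
const rows = 20_000_000;
const lists = 4_200;
const probes = 10;
const dims = 1_200;
const bytesPerDim = 4;
const throughputBytesPerSec = 6 * 1_000_000; // ~6 MB/s observed

const vectorsScanned = (rows / lists) * probes;        // ~47,619 vectors
const bytesRead = vectorsScanned * dims * bytesPerDim; // ~229 MB
const coldSeconds = bytesRead / throughputBytesPerSec; // ~38 s
```

In other words, even at face value the cold time is dominated by the sheer volume read per query, so the real question is why the effective random-read throughput is only ~6 MB/s rather than SSD rates.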
Realtime Row Level security broken for database-wide subscriptions on PG12
Bug report Describe the bug Affects: PG12 and PG14 (although PG14 will only cover new use-cases, so nobody should be experiencing the bug there). Unaffected: PG13 users. If you are experiencing problems with PG13, it is probably because your Policies are not allowing access (whereas previously Policies weren't applied). This is intended behaviour: your database security is working properly. Description We've discovered an issue, introduced by our Row-Level Security Realtime updates, that affects any database-wide publication (e.g. `CREATE PUBLICATION supabase_realtime FOR ALL TABLES;`). To Reproduce Realtime subscriptions may have stopped if you are on PG12 and you have a Publication in your database set up like this: <img width="1467" alt="Screenshot 2021-12-01 at 7 49 05 PM" src="https://user-images.githubusercontent.com/10214025/144230037-9539750e-7092-40f3-9487-f3eeac582ed9.png"> Workaround Disable the `FULL` publication (red) and enable each table individually (green): <img width="1467" alt="Screenshot 2021-12-01 at 7 50 51 PM" src="https://user-images.githubusercontent.com/10214025/144229745-0747d54e-dc4e-4a5a-a286-a63e0cfdfd0b.png">
Mongoose not reconnecting when MongoDB driver does
Do you want to request a feature or report a bug? bug What is the current behavior? When connected to a Mongos container that is restarted, Mongoose never reconnects. I make sure to pass these options: [code block] However, the logs show me that the driver does reconnect: [code block] If the current behavior is a bug, please provide the steps to reproduce. 1. Run a Node.js app that connects to a Mongos on a Docker container 2. Restart your container What is the expected behavior? Mongoose should reconnect when the container comes back online. Please mention your node.js, mongoose and MongoDB version. nodeJS: 8.9.4 and 9.2.0 mongoose: 5.0.5 mongodb: 3.4.10 EDIT: I'm using useDb() to connect to other tenants' databases, and I just noticed that after the Mongos restart, all the calls made to the parent connection db on which I used useDb() work, but the calls made on the children connections don't resolve. Also, the readyState is never set to 1 on the parent connection even though the connection is properly reconnected.
AnthropicVertex stream chat generation is taking too much time
Recently, I started using AnthropicVertex instead of the direct Anthropic client. When I generate data through the AnthropicVertex client, it takes around 2 s to start streaming, whereas direct Anthropic does not take this long. The 2 s duration is also variable; sometimes it takes quite a lot longer, up to 6-10 s, and in the worst case up to 20 s. So, is there some kind of queue involved? I am using the same code given in the Vertex AI Anthropic notebook to generate responses. Is there any workaround I need to apply to get responses as fast as with direct Anthropic? If someone could guide me on this, it would be really helpful. Thanks!
Automatic/scalable shader warm-up
Current status: https://github.com/flutter/flutter/issues/32170#issuecomment-1016927129 ---- Although Flutter's current [shader warm-up][1] process is theoretically able to eliminate all shader compilation jank, it requires too much expertise and time to find all the draw operations for such a process. We shall find a more automatic and scalable way of finding those draw operations to compose a custom warm-up process. That should allow average developers without much Skia knowledge to eliminate shader compilation jank on their own. That should also save Flutter engineers a lot of time, as they'll no longer have to manually analyze the janky frames. Note that this issue affects iOS devices more significantly than Android devices because Android has a binary shader persistent cache while iOS doesn't. We have a staged plan to roll this out, and we're actively working on it. - [x] SkSL-based shader warmup - [ ] Test-based shader warmup - [ ] In the long term, we'd like to create a solution that requires 0 extra effort from developers, and handles all cases that may have never been encountered during any tests or warmups. For example, use the CPU backend when the GPU backend needs warm-up. Related issues: flutter/flutter#813, flutter/flutter#31881 [1]: https://api.flutter.dev/flutter/painting/ShaderWarmUp-class.html
Server key authentication
How can I authenticate providing a server key?
The CVE-2025-23061 advisory is incomplete and `npm audit` is wrong
Prerequisites - [x] I have written a descriptive issue title - [x] I have searched existing issues to ensure the bug has not already been reported Mongoose version 7.8.4 Node.js version 18 MongoDB server version 6 Typescript version (if applicable) 5.4 Description The commit lists the fixed versions as 8.9.5, 7.8.4, and 6.13.6. The CVE advisory, however, says: Affected versions: < 8.9.5 / Patched versions: 8.9.5 Steps to Reproduce On a project with version 7.8.4 or 6.13.6, run: npm audit Expected Behavior No vulnerability reported.
WebSocket is closed before the connection is established.
Hey. I am trying to use socket.io with Chrome 17, but it can't connect in 40% of the cases and says "WebSocket is closed before the connection is established." At times it works perfectly. I am using the latest socket.io version with the latest Node.js, both on Windows, cross-domain. Any ideas? Cheers
createChatCompletion seems to ignore the abort signal
Describe the bug Sending an 'abort' signal to `createChatCompletion` does not raise an error nor stop the completion. It makes me believe that this discussion on the OpenAI community is accurate: https://community.openai.com/t/cancelling-openai-apis-request/99754, but I would like to verify it isn't a bug in this library. To Reproduce Here's my code: [code block] Expectation: I should see output like this, and then an error should be raised: [code block] Actual: I see output like this that never stops: [code block] Code snippets _No response_ OS macOS Node version v19.8.1 Library version openai v3.2.1
Proposal: Async Functions
Async Functions <a name="1"/>1 Async Functions This is a spec proposal for the addition of _Async Functions_ (also known as `async..await`) as a feature of TypeScript. <a name="2"/>2 Use Cases _Async Functions_ allow TypeScript developers to author functions that are expected to invoke an asynchronous operation and await its result without blocking normal execution of the program. This is accomplished through the use of an ES6-compatible `Promise` implementation, and transposition of the function body into a compatible form to resume execution when the awaited asynchronous operation completes. This is based primarily on the Async Functions strawman proposal for ECMAScript, and C# 5.0 § 10.15 _Async Functions_. <a name="3"/>3 Introduction <a name="3.1"/>3.1 Syntax An _Async Function_ is a _JavaScript Function_, _Parameterized Arrow Function_, _Method_, or _Get Accessor_ that has been prefixed with the `async` modifier. This modifier informs the compiler that function body transposition is required, and that the keyword `await` should be treated as a unary expression instead of an identifier. An _Async Function_ must provide a return type annotation that points to a compatible `Promise` type. Return type inference can only be used if there is a globally defined, compatible `Promise` type. Example: [code block] <a name="3.2"/>3.2 Transformations To support this feature, the compiler needs to make certain transformations to the function body of an _Async Function_. The type of
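A minimal illustration of the syntax described in 3.1 (my own example, not taken from the proposal):

```typescript
// An async function with an explicit Promise return type annotation,
// as required by section 3.1; `await` suspends the function body until
// the awaited promise settles, without blocking the program.
async function delayedDouble(n: number): Promise<number> {
  const value = await Promise.resolve(n);
  return value * 2;
}
```

The compiler rewrites ("transposes") such a body into promise-chaining form, which is why a compatible `Promise` type must be in scope.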
The project name should be configurable
Ideally in `fig.yml` (how do we have general config in there? maybe we need a top level `services` key?), but maybe in a separate `.fig-project` file or something.
kubernetes-e2e-kops-aws-updown: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-kops-aws-updown/2102/ Run so broken it didn't make JUnit output!
aws sync hangs
I am consistently seeing the AWS CLI fail during a sync to S3 with the following command:
aws s3 sync --size-only --page-size 100 /mnt/ebs-volume/image/ s3://bucket-name
ubuntu@ip-10-0-0-246:~/www$ aws --v
aws-cli/1.11.56 Python/2.7.12 Linux/4.4.0-64-generic botocore/1.5.19
It runs well for the first gig and then hangs. This is a 50 gig filesystem:
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~4 file(s) remaining (calculating...upload: ../..//img_2630_thumb.png to s3://bucket/image.png
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~3 file(s) remaining (calculating...
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~3 file(s) remaining (calculating...
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~3 file(s) remaining (calculating...upload: ../../img_2630.png to s3://bucket/img_2630.png
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~2 file(s) remaining (calculating...
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~2 file(s) remaining (calculating...upload: ../../img_2628.png to s3://bucket/img_2628.png
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~1 file(s) remaining (calculating...
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~1 file(s) remaining (calculating...upload: ../../image/img_2628_thumb.png to s3://bucket/img_2628_thumb.png
Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~0 file(s) remaining (calculating...
And then it just sits there. I'm really not sure what to check at this point as the CLI is not very verbose.
[BUG] npm@10 refuses to install packages from GitHub package registry
Is there an existing issue for this? - [X] I have searched the existing issues This issue exists in the latest npm version - [X] I am using the latest npm Current Behavior npm@10 is not installing packages from a private GitHub Enterprise Server registry. It fails with a 403 Forbidden error when trying to fetch the package. Previous versions of npm do not encounter this issue. Expected Behavior I should be able to install packages from a private registry when my `.npmrc` is configured properly. Steps To Reproduce Remove ~/.npm to ensure we're not pulling from cache [code block] Configure `.npmrc` to use a private registry (in this case, I am using GitHub Enterprise Server 3.9.3) [code block] Install a package from the private registry: [code block] Debug log: 2023-11-25T00_29_42_285Z-debug-0.log When using npm v9, this error does not occur. For example, with Node 18/npm 9: [code block] Using npm v10 with node 18 fails with the same 403. Using npm v9 with node 21 works. [code block] This seems to indicate that something has changed in npm 10 in how it handles private registries (or at least GitHub's registry). Environment - npm: 10.2.4 - Node.js: 20, 21 - OS Name: Linux (RHEL 8) - npm config: [code block]