All Issues
24,993 verified issues
ImportError: cannot import name 'ModelField' from 'pydantic.fields'
Hello. I get an import error when trying to use Anthropic. [code block] [code block] [code block]
reconnect_failed never gets fired
If I stop the node server so that every reconnect attempt of the client fails, the reconnect_failed event never gets fired. Here is the position in the code where reconnect_failed should be fired: https://github.com/LearnBoost/socket.io-client/blob/master/lib/socket.js#L501. For some reason it never enters the else part.
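One way to reason about that else branch: the failure event can only be reached once a finite attempt limit is exhausted. The following is a simplified, hypothetical model of a reconnect loop, not the library's actual code:

```javascript
// Simplified, hypothetical model of a reconnect loop: the 'reconnect_failed'
// branch is reachable only when the attempt counter exhausts a finite limit.
function reconnectLoop(maxAttempts, tryConnect) {
  let attempts = 0;
  while (attempts < maxAttempts) {
    attempts += 1;
    if (tryConnect()) {
      return { attempts, event: 'reconnect' };
    }
  }
  return { attempts, event: 'reconnect_failed' };
}

// With a finite limit the failure event is eventually reported:
reconnectLoop(3, () => false); // → { attempts: 3, event: 'reconnect_failed' }
// With maxAttempts = Infinity, this loop would spin forever and the
// 'reconnect_failed' branch would never be reached.
```

If the client's max-reconnection-attempts setting is effectively unlimited, the analogous branch in the real code would likewise never run; checking that option may help narrow this down.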
Vacuums extremely slow for HNSW indices?
We recently deleted a large part of a ~20 million row table (with an HNSW index size of ~31GB). Attempting to manually vacuum the table took us 10 hours (before we cancelled it) - vacuum got stuck on vacuuming the HNSW index. We tried doing a parallel vacuum and that also didn't seem to help. Our metrics showed that we weren't limited by CPU or memory at any point. Eventually we gave up, dropped the index, vacuumed the table (took <10 mins to complete), and recreated the index. Any guidance as to what we were doing wrong and/or should be doing better in future?
Invalid API Key when using claude 2.0
Windows 11 with Python 3.10, using the code below. It results in an "Invalid API Key" error, but I'm sure the api_key is good because I get a good response via an unofficial API call (from another GitHub repository).

```python
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

anthropic = Anthropic(api_key="sk-ant-XXXXXX")

def getResponse(prompt):
    msg = f"{HUMAN_PROMPT} {prompt} {AI_PROMPT}"
    print(msg)
    completion = anthropic.completions.create(
        model="claude-2",
        max_tokens_to_sample=30000,
        prompt=msg,
    )
    res = completion.completion
    print(res)
    return res

if __name__ == "__main__":
    getResponse("Hello, Claude")
```

The last 3 lines of the error message:

```
File "D:\Python310\anthropic\lib\site-packages\anthropic\_base_client.py", line 761, in _request
    raise self._make_status_error_from_response(request, err.response) from None
anthropic.AuthenticationError: Error code: 401 - {'error': {'type': 'authentication_error', 'message': 'Invalid API Key'}}
```

Appreciate your help. Thanks.
Socket.io disconnect with ping timeout randomly
We have a real-time chat application that uses socket.io, node.js, mongodb, etc. Chat is working very well except for one issue, which is as follows: sometimes in the middle of chatting, users get disconnected with a ping timeout. As far as I checked, there was no problem with the internet connection, and there are also no logs of re-connection attempts; it directly gets disconnected.

OS: Ubuntu 14.04 / AWS EC2
socket.io version on server: 1.6.0
Node version: v0.10.25

Please let us know what could be the problem, and let me know if you need any other details.
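One knob worth checking is the server's heartbeat configuration: a client is dropped with "ping timeout" when it fails to answer a ping in time. A sketch of the relevant socket.io 1.x server options (the values below are the documented defaults, shown for illustration, not a recommendation):

```javascript
// Illustrative socket.io 1.x heartbeat options: the server pings every
// pingInterval ms and disconnects a client with 'ping timeout' if no pong
// arrives within pingTimeout ms.
const serverOptions = {
  pingInterval: 25000, // how often the server sends a ping (ms)
  pingTimeout: 60000,  // how long to wait for the pong before disconnecting (ms)
};
// const io = require('socket.io')(httpServer, serverOptions);
```

If a proxy or load balancer in front of the app buffers or kills long-lived connections, the pong can be delayed past pingTimeout even when the user's internet is fine, which would match the symptoms described.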
After upgrading ioredis from 4.10.0 to 4.11.1: Error: connect ETIMEDOUT
Error: [code block] With 4.10.0 it works. How I connect: [code block] With 4.10.0 I can connect with either URL; with 4.11.0 I get a timeout.
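To help rule out URL-parsing differences between the two versions, it may be worth connecting with an explicit options object instead of a URL string. A hypothetical sketch (host and port are placeholders):

```javascript
// Hypothetical explicit connection options for ioredis, as an alternative to
// a URL string; connectTimeout makes a bad endpoint fail fast instead of
// hanging indefinitely.
const redisOptions = {
  host: 'redis.example.com', // placeholder
  port: 6379,
  connectTimeout: 10000, // ms; ioredis raises ETIMEDOUT after this long
};
// const Redis = require('ioredis');
// const redis = new Redis(redisOptions);
```

If the explicit form connects on 4.11.x while the URL form times out, that would point at how the URL is being parsed rather than at the network.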
Twilio Verify VerificationCheck is out of date
Issue Summary

Calling the following code example from the Verification docs results in an exception.

Code Snippet

[code block]

Exception/Log

[code block]

The phone receives the SMS with the verification code. Moreover, it shows as verified when checked in the logs on the Twilio console after calling the above function. But this exception makes it impossible for the application to know whether the verification succeeded or not.

Steps to Reproduce

1. Call the verifications.create() method with the required parameters to send the SMS with the code.
2. Call the verificationChecks.create() method with the previous phone number and the correct code.

Technical details:

- twilio-node version: 3.75.0
- node version: 14.17.1
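For reference, here is a hypothetical wrapper around the check call, written against a generic client object so the sketch is self-contained; a status of 'approved' means the submitted code matched. The stub below only mimics the client's shape for illustration:

```javascript
// Hypothetical helper around Verify's verificationChecks.create(); 'client'
// can be a twilio client or any object with the same shape.
async function isCodeValid(client, serviceSid, to, code) {
  const check = await client.verify
    .services(serviceSid)
    .verificationChecks.create({ to, code });
  return check.status === 'approved';
}

// Self-contained usage with a stub client (no network calls):
const stubClient = {
  verify: {
    services: () => ({
      verificationChecks: {
        create: async ({ code }) =>
          ({ status: code === '1234' ? 'approved' : 'pending' }),
      },
    }),
  },
};
// isCodeValid(stubClient, 'VAxxxx', '+15017122661', '1234') resolves to true
```

Wrapping the call this way also gives one place to catch the exception described above while still surfacing the approved/pending distinction to the application.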
feature: including external docker-compose.yml
The Problem

I'd like to use fig for acceptance testing of services. In the case of a single service with a single level of dependencies this is easy (ex: a webapp with a database). As soon as the dependency tree grows to depth > 1, it gets more complicated (ex: service-a requires service-b, which requires a database). Currently I'd have to specify the service dependency tree in each `fig.yml`. This is not ideal: as the tree grows we end up with a lot of duplication, and having to update several different `fig.yml`s in different projects when a service adds or changes a dependency is not great.

Proposed Solution

Support an `include` section in the `fig.yml` which contains urls/paths to other `fig.yml` files. These files would be included in a `Project` so that the top-level `fig.yml` can refer to them using `<project>_<service>`.

Example config

[code block]
Add 'promise' return value to model save operation
Add promise return support to the save operation, which currently returns undefined. There are multiple return paths, depending on validation and save state, that would need to be updated. This would allow chaining multiple save operations with .then() instead of nesting callbacks.
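Until native promise support lands, a hypothetical userland wrapper over the success/error callbacks can provide the same chaining. The stub model below stands in for a real Backbone-style model so the sketch is self-contained:

```javascript
// Hypothetical adapter from a callback-style save() to a Promise.
// save() returns false when client-side validation fails, and in that case
// the success/error callbacks never fire, so that path is rejected explicitly.
function saveAsync(model, attrs) {
  return new Promise((resolve, reject) => {
    const result = model.save(attrs, {
      success: (m) => resolve(m),
      error: (m, response) => reject(response),
    });
    if (result === false) {
      reject(new Error('validation failed'));
    }
  });
}

// Chaining with a stub model standing in for a Backbone model:
const stubModel = {
  save(attrs, options) {
    if (attrs && attrs.bad) return false; // simulate a validation failure
    options.success(this);
    return {}; // a real model would return the underlying xhr here
  },
};
// saveAsync(stubModel, { a: 1 }).then(() => saveAsync(stubModel, { b: 2 }))
```

This mirrors the return paths mentioned above: callback success, callback error, and the synchronous `false` from failed validation each map onto a distinct promise outcome.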
ChatCompletionStream.fromReadableStream errors due to missing finish_reason for choice
Confirm this is a Node library issue and not an underlying OpenAI API issue

- [X] This is an issue with the Node library

Describe the bug

When trying to use the API described here https://github.com/openai/openai-node/blob/2242688f14d5ab7dbf312d92a99fa4a7394907dc/examples/stream-to-client-browser.ts I'm getting an error at the following point, where the actual choices look like this: It looks like the code expects `finish_reason` to be populated, but the finish details are now in a property called `finish_details`?

To Reproduce

Set up a server that responds with chat completion streams, then in the client try to use the `ChatCompletionStream.fromReadableStream` API, e.g.:

[code block]

Code snippets

_No response_

OS: Windows
Node version: 18.12.1
Library version: 4.16.1
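As a stopgap while the payload shape settles, a defensive accessor could read whichever field is present. Note that `finish_details` and its `type` field are taken from the report above, not from documented API guarantees:

```javascript
// Hypothetical defensive accessor: prefer the documented finish_reason,
// fall back to the finish_details.type observed in the report, else null.
function getFinishReason(choice) {
  if (choice.finish_reason != null) return choice.finish_reason;
  if (choice.finish_details && choice.finish_details.type != null) {
    return choice.finish_details.type;
  }
  return null;
}

// getFinishReason({ finish_reason: 'stop' })            // → 'stop'
// getFinishReason({ finish_details: { type: 'stop' } }) // → 'stop'
// getFinishReason({})                                   // → null
```

A helper like this only papers over the mismatch on the consuming side; the underlying fix would still need to land in the library's stream handling.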
Investigate switching away from GitHub
As ESLint has continued to grow, we've started to outgrow the GitHub ecosystem. Team members spend hours each day triaging issues, many of which have incomplete information or are otherwise unclear. As such, we spend a lot of time just asking for more information ("what do you mean?", "what version are you using?", etc.). This has become increasingly frustrating for everyone on the team and ultimately takes time away from being able to contribute code to the project. Additionally, it's nearly impossible to keep up with which issues are the most important and whether or not people are following up on issues. In short, everything that Dear GitHub mentioned is severely hurting the team now. As ESLint's popularity continues to grow and more and more issues are filed, this is quickly becoming a scalability problem. The team has discussed investigating alternatives to GitHub to see if we can find a home that is better suited to open source projects at our level of scale. We strongly feel that the code and issue tracker need to live in the same location to make it easier to manage and give people one location to visit for all of their ESLint-related needs (so simply moving to a different issue tracker and keeping the code on GitHub is not an alternative).

Requirements

- Must host the repo along with the related tools
- Must be able to run automated tests on pull requests
- Must allow contributions from anyone
- Must have a way to set up issue templates prescribing what fields are required
Support functions
Currently, the IvfflatGetType and HnswGetType functions do a syscache lookup to get the datatype, and then there is a bunch of code that does things like this based on the type: [code block] That's a bit of an unusual pattern for indexes; the usual pattern would be to have a support function in the opclass to encapsulate any type-specific differences.

To refactor this to use support functions, the minimal change to what's in 'master' would be to define one new support function, something like 'get_vector_type()', which returns a HnswType or IvfflatType. The HnswGetType()/IvfflatGetType() functions would then just call the support function, and those if-statements would remain unchanged.

A more proper way to use support functions would be to have functions like 'hnsw_get_max_dimensions' and 'hnsw_check_value' to replace the places where we currently check the type (GetMaxDimensions and HnswCheckValue).

A third approach would be to have just one support function like 'hnsw_type_support' that returns a struct like: [code block] That might be handier than having a lot of support functions, and you get better type checking from the compiler because you don't need to convert all arguments to Datums. Ivfflat has "if (type == IVFFLAT_TYPE_VECTOR) ..." style checks, so it would need more support functions, something like: [code block]
📢 Notebook API announcements
We introduced a proposed API for Notebook, but currently the API is mainly two managed objects: `NotebookDocument` and `NotebookCell`. We create them for extensions and listen to their property changes. However, this doesn't follow the principle of `TextEditor`/`TextDocument`, where `TextDocument` is always readonly and `TextEditor` is the API for applying changes to the document. If we try to follow `TextEditor`/`TextDocument`, the API can be shaped as below: [code block]
Build failure on PostgreSQL 18 beta 1
Build failure: [code block]
Support for embedded documents / dbrefs
I remember there was a message about adding DBRef support via a plugin or something; is this coming anytime soon? And since it's a somewhat related issue, why is there a way to define an array of embedded documents and (as far as I can tell) no way of embedding a single document? [code block] And the output:

```
$ node test.js
TypeError: undefined is not a function
    at CALL_NON_FUNCTION_AS_CONSTRUCTOR (native)
    at Schema.path (/home/bobry/code/mb/support/mongoose/lib/mongoose/schema.js:119:24)
    at Schema.add (/home/bobry/code/mb/support/mongoose/lib/mongoose/schema.js:88:12)
    at new Schema (/home/bobry/code/mb/support/mongoose/lib/mongoose/schema.js:26:10)
    at Object.<anonymous> (/home/bobry/code/mb/test.js:9:15)
    at Module._compile (node.js:462:23)
    at Module._loadScriptSync (node.js:469:10)
    at Module.loadSync (node.js:338:12)
    at Object.runMain (node.js:522:24)
    at Array.<anonymous> (node.js:756:12)
```
SSL error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
Got this error on both machines at almost the same time, with docker-compose and lately with fig after a rollback. A few search results point to a python/openssl issue, but I simply can't figure out where to dig. Python/openssl comes from homebrew.

```
Boot2Docker-cli version: v1.4.1
Git commit: 43241cb
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.4
Git commit (client): 5bc2ff8
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
```
Host subdomain
This simple proxy server:

```javascript
var httpProxy = require('http-proxy');
httpProxy.createServer(function (req, res, proxy) {
  var options = {
    host: "localhost",
    port: 5984
  };
  proxy.proxyRequest(req, res, options);
}).listen(5000);
```

If I change host: "localhost" to host: "subdomain.domainoninternet.com", I get 'Host not found' back. Is this the "www . host" issue? https://github.com/nodejitsu/node-http-proxy/issues/150

On a related question, is there a way to set request headers before they are sent to the destination? (to set Auth headers from the server) Thank you.
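On the second question: in the 0.x API shown above, the request object can be mutated before proxyRequest() forwards it, so one possible way (a hypothetical sketch, not an official helper) to inject auth headers is:

```javascript
// Hypothetical helper: mutate req.headers before handing the request to
// proxy.proxyRequest(), e.g. to add server-side basic auth.
function withAuthHeader(req, user, pass) {
  const token = Buffer.from(user + ':' + pass).toString('base64');
  req.headers = Object.assign({}, req.headers, {
    authorization: 'Basic ' + token,
  });
  return req;
}

// const req2 = withAuthHeader(req, 'admin', 'secret');
// proxy.proxyRequest(req2, res, { host: 'subdomain.example.com', port: 5984 });
```

Since the proxy copies req.headers into the outgoing request, anything set here reaches the destination server.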
Why is express-session very slow?
Environment: vm - centos7 (4 CPU, 8 GB memory, gigabit NAT). I tested with Express versions 3 and 4, set up as follows.

package.json

[code block]

A. no session api.js

[code block]

B. session api.js

[code block]

Content download speed is extremely slow, even in a local environment. How can I speed it up?
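One thing worth checking is whether the session store is being hit on every request; with the express-session module, the resave and saveUninitialized options control that. A hypothetical options sketch (the secret is a placeholder):

```javascript
// Hypothetical express-session options that avoid touching the store on
// every request, a common source of per-request latency:
const sessionOptions = {
  secret: 'keyboard cat',   // placeholder, use a real secret
  resave: false,            // don't rewrite sessions that didn't change
  saveUninitialized: false, // don't persist sessions with nothing in them
};
// app.use(require('express-session')(sessionOptions));
```

If the slowdown persists with these set, profiling the store itself (the default in-memory store versus Redis/Mongo-backed stores) would be the next thing to compare between the A and B setups above.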
Performance drop from 4.13.10 to 5.0.1
Do you want to request a feature or report a bug?

Bug

What is the current behavior?

After upgrading from `4.13.10` to `5.0.1` in our production environment, we noticed a real performance drop in all requests using mongoose models: ~200ms per response. We didn't release any code change apart from the `mongoose` dependency upgrade, and we don't use any particular feature of mongoose apart from models & schemas. After applying a fix to revert to `4.13.10`, everything went back to normal.

Deploying with `5.0.1`:

Reverting back to `4.13.10`:

If the current behavior is a bug, please provide the steps to reproduce.

More of a performance issue than a bug, so I'm not sure I will be able to reproduce it outside of a production environment. If someone knows any way to benchmark this I will be happy to try 😊

What is the expected behavior?

Response times around ~20ms for 150 RPS.

Please mention your node.js, mongoose and MongoDB versions.

node.js 8.9.4, mongoose 5.0.1, MongoDB 3.4.10. If it's of any use, we are on a cloud-managed (Atlas) sharded cluster.
couldn't find DSO to load: libhermes.so caused by
Description

[code block]

Version

0.71.0

Output of `npx react-native info`

```
System:
  OS: macOS 13.1
  CPU: (10) arm64 Apple M1 Pro
  Memory: 70.36 MB / 16.00 GB
  Shell: 5.8.1 - /bin/zsh
Binaries:
  Node: 16.16.0 - ~/.nvm/versions/node/v16.16.0/bin/node
  Yarn: 1.22.19 - /opt/homebrew/bin/yarn
  npm: 8.11.0 - ~/.nvm/versions/node/v16.16.0/bin/npm
  Watchman: Not Found
Managers:
  CocoaPods: 1.11.3 - /opt/homebrew/lib/ruby/gems/2.7.0/bin/pod
SDKs:
  iOS SDK:
    Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1
  Android SDK: Not Found
IDEs:
  Android Studio: 2021.3 AI-213.7172.25.2113.9123335
  Xcode: 14.2/14C18 - /usr/bin/xcodebuild
Languages:
  Java: 17.0.4 - /usr/bin/javac
npmPackages:
  @react-native-community/cli: Not Found
  react: 18.2.0 => 18.2.0
  react-native: 0.71.0 => 0.71.0
  react-native-macos: Not Found
npmGlobalPackages:
  react-native: Not Found
```

Steps to reproduce

The build fails when targeting Android; there are no issues on iOS.

Snack, code example, screenshot, or link to a repository

The build fails when targeting Android; there are no issues on iOS.