Verified Fixes
60 issues with community-verified solutions (trust score ≥ 70)
Consider not deprecating `typeInfo` and `getFieldDef` options to the `validate` function and `TypeInfo` constructor
https://github.com/graphql/graphql-js/blob/e1726dfea66979bfe7ad1c0b0834613e4b6ce4b4/src/validation/validate.ts#L44-L45 https://github.com/graphql/graphql-js/blob/e1726dfea66979bfe7ad1c0b0834613e4b6ce4b4/src/utilities/TypeInfo.ts#L66-L67 From what I understand, by preserving these options, we add the ability for a server to add custom meta fields such as `__fulfilled` or `__directive` and still validate properly.
[Community contributions] Model cards
Hey friends! 👋 We are currently in the process of improving the Transformers model cards by making them more directly useful for everyone. The main goal is to: 1. Standardize all model cards with a consistent format so users know what to expect when moving between different model cards or trying to learn how to use a new model. 2. Include a brief description of the model (what makes it unique/different) written in a way that's accessible to everyone. 3. Provide ready-to-use code examples featuring the `Pipeline`, `AutoModel`, and `transformers-cli` with available optimizations included. For large models, provide a quantization example so it's easier for everyone to run the model. 4. Include an attention mask visualizer for currently supported models to help users visualize what a model is seeing (refer to #36630 for more details). Compare the before and after model cards below: With so many models in Transformers, we could really use a hand with standardizing the existing model cards. If you're interested in making a contribution, pick a model from the list below and then you can get started! Steps Each model card should follow the format below. You can copy the text exactly as it is! [code block]py from transformers.utils.attention_visualizer import AttentionMaskVisualizer visualizer = AttentionMaskVisualizer("google/gemma-3-4b-it") visualizer("<img>What is shown in this image?") [code block]py <insert relevant code snippet here related to the note if its
"lib/IsolatedGPT35TurboMutation/deleteFineTuneModel: AbortController is not defined"
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug What is this error? It works fine locally, but when deploying I get this error: "lib/IsolatedGPT35TurboMutation/deleteFineTuneModel: AbortController is not defined" To Reproduce if (job.model_id) { if (!src.openAiKey) throw new Error('OpenAiKey not found'); const openai = new OpenAI({ apiKey: src.openAiKey }); const model = await openai.models.del(job.model_id); Code snippets _No response_ OS macOS Node version Node 18 Library version openai 4.0.8
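This failure usually points at the runtime rather than the code: `openai` v4 relies on the global `AbortController`, which only became available around Node 14.17/15, so a deployment environment targeting an older runtime will throw exactly this. A minimal sketch of the guard and the abort flow the library depends on (function name is illustrative, not part of the openai SDK):

```javascript
// Sketch: confirm the runtime provides the global AbortController that
// openai v4 uses to cancel in-flight requests, then demonstrate the flow.
function makeAbortableSignal() {
  if (typeof globalThis.AbortController === "undefined") {
    // Older runtimes (before roughly Node 14.17/15) would land here and
    // need a polyfill or a runtime upgrade.
    throw new Error("AbortController is not defined in this runtime");
  }
  const controller = new AbortController();
  return {
    signal: controller.signal,
    abort: () => controller.abort(),
  };
}

const handle = makeAbortableSignal();
handle.abort(); // signal.aborted flips to true once abort() is called
```

Checking `node --version` on the deployment target (not just locally) is the quickest way to confirm this diagnosis.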
Internal Server Error when sending multiple PDFs in conversation
Description When attempting to have a conversation that includes multiple PDF documents with Claude, the API returns a 500 Internal Server Error. The error occurs specifically when trying to send a second PDF after receiving a response for the first one. Steps to Reproduce 1. Initialize a conversation with Claude API 2. Send first message containing: - PDF document (base64 encoded) - Text prompt (Korean for "summarize in one line") 3. Receive assistant's response 4. Send second message with: - Different PDF document (base64 encoded) - Text prompt (Korean for "summarize in one line") 5. API returns 500 Internal Server Error Code Example [code block] Error Message [code block] Environment - Python SDK Version: (version number) - Claude Model: claude-3-5-sonnet-20241022 - Beta Features: ["pdfs-2024-09-25"] Expected Behavior The API should handle multiple PDF documents in a conversation, allowing for sequential analysis of different documents. Additional Context - The first PDF upload and response works correctly - The error occurs specifically when trying to send a second PDF in the conversation - Using the latest beta PDF feature as indicated in the `betas` parameter Questions 1. Is there a limitation on the number of PDFs that can be processed in a single conversation? 2. Is there a specific way to handle multiple PDF documents in a conversation that I'm missing? 3. Could this be related to the beta status of the PDF feature?
VS Code fails to start/open when offline
- VSCode Version: 1.2.0 - OS Version: Windows 8 Steps to Reproduce: 1. Install VS Code 1.2.0 and open first time while connected to internet 2. Turn off data connection 3. Try to restart VS Code (it does not open at all, either from shortcuts or exe)
[ESLint] Feedback for 'exhaustive-deps' lint rule
Common Answers 💡💡💡 We analyzed the comments on this post to provide some guidance: https://github.com/facebook/react/issues/14920#issuecomment-471070149. 💡💡💡 ---- What is this This is a new ESLint rule that verifies the list of dependencies for Hooks like `useEffect` and similar, protecting against the stale closure pitfalls. For most cases it has an autofix. We'll add more documentation over the next weeks. Installation [code block] ESLint config: [code block] Simple test case to verify the rule works: [code block] The lint rule complains but my code is fine! If this new `react-hooks/exhaustive-deps` lint rule fires for you but you think your code is correct, please post in this issue. ---- BEFORE YOU POST A COMMENT Please include these three things: 1. A CodeSandbox demonstrating a minimal code example that still expresses your intent (not "foo bar" but the actual UI pattern you're implementing). 2. An explanation of the steps a user does and what you expect to see on the screen. 3. An explanation of the intended API of your Hook/component. But my case is simple, I don't want to include those things! It might be simple to you, but it's not at all simple to us. If your comment doesn't include these things (e.g. no CodeSandbox link), we will hide your comment because it's very hard to track the discussion otherwise. Thank you for respecting everyone's time by including them.
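The class of bug the rule guards against can be illustrated without React at all: a closure captures a value once and keeps seeing the old one. A minimal plain-JavaScript sketch (not React code, just the stale-closure mechanics that omitting a dependency produces):

```javascript
// A callback created from an old value keeps returning that old value,
// the same way an effect with a missing dependency keeps seeing stale props/state.
function makeCounterLogger(count) {
  // analogous to defining a callback inside useEffect without listing `count`
  return () => count;
}

let count = 0;
const staleLogger = makeCounterLogger(count); // captured once, like a [] deps array
count = 5;
const freshLogger = makeCounterLogger(count); // re-created, like listing [count]
```

`staleLogger` still reports the value from creation time; re-creating the closure when the value changes (which is what a correct dependency array forces) is the fix.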
Add `trpc.useContext().setQueryData()` callback
When updating values in the query cache which rely upon the old value, an updater callback comes in handy, e.g.: [code block] Instead of: [code block] This overload should be made available through `trpc.useContext().setQueryData`.
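For reference, the requested overload mirrors the value-or-updater dispatch that react-query's `setQueryData` performs underneath. A standalone sketch of that logic (the `cache` Map and function names are illustrative, not tRPC internals):

```javascript
// Sketch of the value-or-updater overload the issue asks for.
// `cache` stands in for the query cache; not tRPC's real implementation.
function setQueryData(cache, key, valueOrUpdater) {
  const previous = cache.get(key);
  const next =
    typeof valueOrUpdater === "function"
      ? valueOrUpdater(previous) // updater form: derive from the old value
      : valueOrUpdater;          // plain form: overwrite
  cache.set(key, next);
  return next;
}

const cache = new Map([["todos", [1, 2]]]);
setQueryData(cache, "todos", (old) => [...old, 3]); // relies on the old value
setQueryData(cache, "count", 10);                   // plain overwrite
```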
xhr.abort on disconnect not sending connection close to server on beforeunload event
Is anybody else running into this issue? Here is the scenario: 1. We are running node/socket.io on a separate server with its own DNS/domainname (Don't ask why) and our application server is on a different host/DNS/domainname. 2. The socket.io.js file is rendered from the node server. 3. When a user navigates away from a page we would like to send a disconnect event to the server from the client. (Should be pretty easy right?) 4. Upon debugging we found that on line 3436 in socket.io.js an xhr.abort command is issued, which should tell the server to initiate a disconnect event. (But no dice). 5. The disconnect finally fires, but not from the unload event: it comes from the connection timeout. 6. Seems pretty simple, anyone else having these issues? Thanks.
Add support for caching
Support for caching would be an awesome addition. Something along the lines of nginx: <pre> location ~* \.(jpg|png|gif|jpeg|css|js|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ { proxy_buffering on; proxy_cache_valid 200 120m; expires 864000; } </pre>
AsyncQueue is already failed: The transaction was aborted, so the request cannot be fulfilled.
Describe your environment Operating System version: Android, so the version varies Browser version: Usually latest Chrome (73.0.3683.90) Firebase SDK version: 5.9.3 Firebase Product: Firestore Describe the problem We're seeing the following error more and more lately (via our error reporting service): [code block] This error is of a critical nature, as it entirely breaks the DB layer. ~~Data stops getting written to the database, and it can be somewhat difficult to detect when the DB gets into this state, as the error is only thrown once. You can then call all the DB functions, and everything appears to work as normal, except no data ends up in the cloud (nor the persistence layer).~~ (This was incorrect. Our error collection service just aggregated the data) Steps to reproduce: Apologies, but it's entirely sporadic. It could be related to IndexedDB reaching a certain size, as clearing it out seems to postpone the problem for a few weeks. I have no theories as to what causes the "transaction to be aborted". We were hoping the new garbage collection feature would take care of the problem, but so far it doesn't seem like it. The devices running this are of the model "Samsung T580". Any other info you need, let me know.
kubernetes-e2e-gci-gke-multizone: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-multizone/1128/ Run so broken it didn't make JUnit output!
ci-kubernetes-e2e-gci-gke-serial: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial/93/ Run so broken it didn't make JUnit output!
Changing headers should be an option
Changing the "location" & "origin" headers when proxying a WebSocket request should be an option, because this is important to make the proxying behave as if it "never happened".
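As a sketch of what such an option could look like (entirely hypothetical, not the proxy library's actual API): an opt-in rewrite would copy the headers and swap the origin before forwarding, so the upstream sees the proxy target rather than the public host.

```javascript
// Hypothetical sketch of an opt-in header rewrite for proxied WebSocket
// upgrade requests: when enabled, the origin header is rewritten so the
// upstream cannot tell the request passed through a proxy.
function rewriteProxyHeaders(headers, target, { changeOrigin = false } = {}) {
  const out = { ...headers };
  if (changeOrigin && out.origin) {
    out.origin = target; // make the hop look like it "never happened"
  }
  return out;
}

const rewritten = rewriteProxyHeaders(
  { origin: "https://public.example.com", host: "public.example.com" },
  "https://internal.example.com",
  { changeOrigin: true }
);
```

Keeping the rewrite behind a flag (here `changeOrigin`) preserves the current pass-through behavior as the default.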
npm ERR! cb() never called!
I want to install create-react-app but bumped into this bug. I have tried many solutions from articles and Stack Overflow, like cache verify, cache cleaning, and other things, but it still produces this error. Any help with this problem? 2020-10-07T11_11_07_015Z-debug.log node -v v12.19.0 npm -v 6.14.8
kubernetes-e2e-gci-gke-flaky: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-flaky/31/ Multiple broken tests: Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite} [code block] Failed: [k8s.io] PersistentVolumes create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite} [code block] Failed: [k8s.io] PersistentVolumes create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite} [code block] Failed: Test {e2e.go} [code block] Issues about this test specifically: #33361 Failed: [k8s.io] PersistentVolumes create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite} [code block] Failed: [k8s.io] PersistentVolumes should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite} [code block]
Support Hybi10 binaryType
Just landed in Chromium: http://code.google.com/p/chromium/issues/detail?id=93652 Allows sending Blob or ArrayBuffer (binary data).
ci-kubernetes-e2e-gci-gke-reboot: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/346/ Multiple broken tests: Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33405 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on {Kubernetes e2e suite} [code block] Issues about this test specifically: #33407 #33623 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33874 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33882 #35316 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33703 #36230 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #34123 #35398 Previous issues for this suite: #37
count parameter needs to be able to interpolate variables from modules.
I've been trying to extend my Terraform examples to support multiple AZs by default. As such, I have an _az_count_ variable output by my module to detect the AZs you have available: https://github.com/terraform-community-modules/tf_aws_availability_zones#outputs And I then want to reuse it for instances, for example here: https://github.com/bobtfish/terraform-vpc/blob/master/main.tf#L37 My example (https://github.com/bobtfish/terraform-example-vpc/tree/master/eucentral1-demo) crashes out with the error: - aws_instance.nat: resource count can't reference module variable: module.vpc.az_count and if I remove this error from the source (hope springs eternal!), I run into: - strconv.ParseInt: parsing "${module.azs.az_count}": invalid syntax This inability to interpolate count variables is a blocker for me being able to write region independent modules - as (for example) I want to be able to allocate one subnet per AZ, writing code like: [code block] Even better, I'd like to be able to interpolate variables out of one module, and into the user variables of another, for example: [code block]
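The second error message shows the raw `${module.azs.az_count}` string reaching Go's integer parser unresolved. The missing step can be sketched as "resolve module-output interpolations before parsing count" — shown here in JavaScript purely for illustration (Terraform itself is written in Go, and the names below are not its internals):

```javascript
// Illustrative sketch: resolve a module-output interpolation *before*
// parsing `count` as an integer, which is the step the report shows failing.
function resolveCount(raw, moduleOutputs) {
  const match = /^\$\{(.+)\}$/.exec(raw);
  const value = match ? moduleOutputs[match[1]] : raw;
  const n = parseInt(value, 10);
  if (Number.isNaN(n)) {
    throw new Error(`count is not a number: ${raw}`);
  }
  return n;
}

const n = resolveCount("${module.azs.az_count}", { "module.azs.az_count": "3" });
```

Without the resolution pass, the parser sees the literal placeholder string, which is exactly the `strconv.ParseInt` failure quoted above.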
Pass variables inside fig.yml on runtime
Is it possible to configure a fig.yml in such a way that you can pass variables at execution time instead of hard-wiring them inside the fig.yml? I think generic variable injection during execution would be quite useful for various use cases. e.g: <pre> jenkins: image: aespinosa/jenkins:latest ports: - "8080" hostname: ${HOSTNAME} </pre> HOSTNAME=ci fig up That could inject the variable HOSTNAME inside the fig.yml during execution and execute a docker run with hostname `ci`. ps. This is different from passing environment variables inside docker, which is already supported (http://www.fig.sh/yml.html#environment)
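The `${VAR}` substitution being requested can be sketched as a small pre-processing pass over the YAML text before it is parsed (illustrative only, not fig's implementation):

```javascript
// Sketch of the requested behaviour: substitute ${VAR} placeholders in a
// fig.yml from the invoking shell's environment before parsing it.
function interpolateEnv(text, env) {
  return text.replace(/\$\{(\w+)\}/g, (whole, name) =>
    name in env ? env[name] : whole // leave unknown variables untouched
  );
}

const yml = "jenkins:\n  hostname: ${HOSTNAME}\n";
const rendered = interpolateEnv(yml, { HOSTNAME: "ci" });
```

Leaving unknown placeholders untouched (rather than substituting an empty string) makes missing-variable mistakes visible in the rendered config.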
Can't build NextJS project with openai library. Getting: Type error: Private identifiers are only available when targeting ECMAScript 2015 and higher.
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug Getting this kind of error at build time on NextJS 14 and I don't know why. This is my tsconfig.json [code block] Thanks for the help. To Reproduce 1) Install and use the library on nextjs 2) import something like `import type { Message } from 'openai/resources/beta/threads/messages';` Code snippets _No response_ OS macOS Node version v22.3.0 Library version 4.52.3
Dial in Twilio functions keeps calling multiple times
Steps to Reproduce Using the following [code block] Both numbers get called as expected. The issue is that if I cancel a call on the first phone, it starts ringing again until I cancel the call on both, accept on one, or a timeout happens. Is there a way to prevent that? The statusCallback function is the following, just used to log some extra stuff. [code block]
io configure - socket io v1.X
What is the example configuration to do this in socket io v1.X.X? [code block]
ImportError: cannot import name 'ModelField' from 'pydantic.fields'
Hello. I get an import error when trying to use Anthropic. [code block] [code block] [code block]
reconnect_failed never gets fired
If I stop the node server so every reconnect of the client fails, the reconnect_failed event never gets fired. Here is the position in the code where reconnect_failed should get fired: https://github.com/LearnBoost/socket.io-client/blob/master/lib/socket.js#L501 For some reason it never enters the else part.
Vacuums extremely slow for HNSW indices?
We recently deleted a large part of a ~20 million row table (with an HNSW index size of ~31GB). Attempting to manually vacuum the table took us 10 hours (before we cancelled it) - vacuum got stuck on vacuuming the HNSW index. We tried doing a parallel vacuum and that also didn't seem to help. Our metrics showed that we weren't limited by CPU or memory at any point. Eventually we gave up, dropped the index, vacuumed the table (took <10 mins to complete), and recreated the index. Any guidance as to what we were doing wrong and/or should be doing better in future?
Invalid API Key when using claude 2.0
Windows 11 with Python 3.10, and I ran the code below. It resulted in an "Invalid API Key" error. But I'm sure the api_key is good because I could get a good response via an unofficial API call (from another github repository). from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT anthropic = Anthropic(api_key = "sk-ant-XXXXXX") def getResponse(prompt): msg=f"{HUMAN_PROMPT} {prompt} {AI_PROMPT}" print(msg) completion = anthropic.completions.create( model = "claude-2", max_tokens_to_sample = 30000, prompt = msg, ) res = completion.completion print(res) return res if __name__ == "__main__": getResponse("Hello, Claude") the last 3 lines of the error messages: File "D:\Python310\anthropic\lib\site-packages\anthropic\_base_client.py", line 761, in _request raise self._make_status_error_from_response(request, err.response) from None anthropic.AuthenticationError: Error code: 401 - {'error': {'type': 'authentication_error', 'message': 'Invalid API Key'}} Appreciate your help. Thanks.
Socket.io disconnect with ping timeout randomly
We have a real-time chat application that uses socket.io, node.js, mongodb etc. Chat is working fine except for one issue, which is as follows: sometimes in between, while chatting, users are getting disconnected with a ping timeout. As I checked, there was no problem with the internet connection, and there are also no logs of reconnection attempts. It directly gets disconnected. OS - Ubuntu 14.04/AWS EC2 socket.io version on server - 1.6.0 Node version - v0.10.25 Please let us know what could be the problem. Also let me know if you need any other details
After upgrading ioredis from 4.10.0 to 4.11.1: Error: connect ETIMEDOUT
error: [code block] With 4.10.0 it works. How I connect: [code block] With 4.10.0 I can connect with either URL; with 4.11.1 I get a timeout.
Twilio Verify VerificationCheck is out of date
Issue Summary Calling the following code example present in the Verification docs results in an exception. Code Snippet [code block] Exception/Log [code block] The phone is getting an SMS with the verification code. Moreover, it's verified when checked in the logs on the Twilio console after calling the above function. But this exception makes it impossible for the application to know if the verification succeeded or not. Steps to Reproduce 1. Call the verifications.create() method with the required parameters to send an SMS with the code. 2. Call the verificationChecks.create() method with the previous phone number and the correct code. Technical details: twilio-node version: 3.75.0 node version: 14.17.1
feature: including external docker-compose.yml
The Problem I'd like to use fig for acceptance testing of services. In the case of a single service with a single level of dependencies this is easy (ex: a webapp with a database). As soon as the dependency tree grows to depth > 1, it gets more complicated (ex: service-a requires service-b which requires a database). Currently I'd have to specify the service dependency tree in each `fig.yml`. This is not ideal because as the tree grows, we end up with a lot of duplication, and having to update a few different `fig.yml`'s in different projects when a service adds/changes a dependency is not great. Proposed Solution Support an `include` section in the `fig.yml` which contains urls/paths to other `fig.yml` files. These files would be included in a `Project` so that the top-level `fig.yml` can refer to them using `<project>_<service>`. Example config [code block]
Add 'promise' return value to model save operation
Add promise return support to the save operation, which currently returns undefined. There are multiple return paths, depending on validation and save state, that would need to be updated. This would allow chaining multiple save operations with .then() instead of nesting.
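Until such support lands, the usual workaround is to wrap the callback-style save in a promise by hand. A hedged sketch, assuming a Node-style `save(cb)` signature (adjust to the actual model API, which may differ):

```javascript
// Sketch: wrap a callback-style save() so callers can chain with .then().
// `model` is any object exposing save(cb); this is not the library's internals.
function saveAsPromise(model) {
  return new Promise((resolve, reject) => {
    model.save((err, saved) => (err ? reject(err) : resolve(saved)));
  });
}

// usage with a stand-in model
const fakeModel = {
  save(cb) { cb(null, { id: 1 }); },
};
```

With a promise in hand, sequential saves become `saveAsPromise(a).then(() => saveAsPromise(b))` instead of nested callbacks.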
ChatCompletionStream.fromReadableStream errors due to missing finish_reason for choice
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug When trying to use the API described here https://github.com/openai/openai-node/blob/2242688f14d5ab7dbf312d92a99fa4a7394907dc/examples/stream-to-client-browser.ts I'm getting an error at the following point: where the actual choices look like this: Looks like the code expects `finish_reason` to be populated, but the finish details are now in a property called `finish_details`? To Reproduce Set up a server that responds with chat completion streams. Then in the client try to use the `ChatCompletionStream.fromReadableStream` API, e.g.: [code block] Code snippets _No response_ OS Windows Node version 18.12.1 Library version 4.16.1
Investigate switching away from GitHub
As ESLint has continued to grow, we've started to outgrow the GitHub ecosystem. Team members spend hours each day triaging issues, many of which have incomplete information or are otherwise unclear. As such, we spend a lot of time just asking for more information ("what do you mean?", "what version are you using?", etc.). This has become increasingly frustrating for everyone on the team and ultimately takes time away from being able to contribute code to the project. Additionally, it's nearly impossible to keep up with what are the most important issues and whether or not people are following up on issues. In short, everything that Dear GitHub mentioned is severely hurting the team now. As ESLint's popularity continues to grow and there are more and more issues filed, this is quickly becoming a scalability problem. The team has discussed investigating alternatives to GitHub to see if we can find a home that is better suited to open source projects with our level of scale. We strongly feel that the code and issue tracker need to live in the same location to make it easier to manage and give people one location to visit for all of their ESLint-related needs (so simply moving to a different issue tracker and keeping the code on GitHub is not an alternative). Requirements - Must host the repo along with the related tools - Must be able to run automated tests on pull requests - Must allow contributions from anyone - Must have a way to set up issue templates prescribing what field
Support functions
Currently, the IvfflatGetType and HnswGetType functions do a syscache lookup to get the datatype, and then there is a bunch of code that does things like this based on the type: [code block] That's a bit of an unusual pattern in indexes; the usual pattern would be to have a support function in the opclass to encapsulate any type-specific differences. To refactor this to use support functions, the minimal change to what's in 'master' would be to define one new support function, something like 'get_vector_type()', which returns a HnswType or IvfflatType. The HnswGetType()/IvfflatGetType() function would then just call the support function. Those if-statements would remain unchanged. A more proper way to use a support function would be to have support functions like 'hnsw_get_max_dimensions' and 'hnsw_check_value', to replace the places where we currently check the type (GetMaxDimensions and HnswCheckValue). A third approach would be to have just one support function like 'hnsw_type_support' that returns a struct like: [code block] That might be more handy than having a lot of support functions, and you get better type checking from the compiler as you don't need to convert all arguments to Datums. Ivfflat has "if (type == IVFFLAT_TYPE_VECTOR) ..." kind of checks, so it would need more support functions, something like: [code block]
📢 Notebook API announcements
We introduced a proposed API for Notebook, but currently the API mainly consists of two managed objects: `NotebookDocument` and `NotebookCell`. We create them for extensions and listen to their property changes. However, this doesn't follow the principle of `TextEditor/Document`, where `TextDocument` is always readonly and `TextEditor` is the API for applying changes to the document. If we try to follow `TextEditor/Document`, the API can be shaped as below [code block]
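The `TextEditor/Document` principle described above (a readonly document plus an editor that owns all mutation) can be sketched in a few lines of plain JavaScript. Class names here are illustrative, not the proposed VS Code API:

```javascript
// Sketch of the readonly-document / mutating-editor split: the document
// only exposes a defensive copy of its state; all changes flow through edit().
class NotebookDocumentSketch {
  #cells;
  constructor(cells) { this.#cells = [...cells]; }
  get cells() { return [...this.#cells]; }            // readonly view
  _replaceCell(index, source) { this.#cells[index] = source; } // internal only
}

class NotebookEditorSketch {
  constructor(document) { this.document = document; }
  edit(index, source) { this.document._replaceCell(index, source); }
}

const doc = new NotebookDocumentSketch(["print(1)"]);
const editor = new NotebookEditorSketch(doc);
editor.edit(0, "print(2)");
```

Because consumers only ever see copies of the cell list, every observable change is guaranteed to have gone through the editor, which is what makes change events reliable in the `TextEditor/Document` model.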
Build failure on PostgreSQL 18 beta 1
Build failure: [code block]
Support for embedded documents / dbrefs
I remember there was a message about adding DBRef support via a plugin or something, is this coming anytime soon? And since it's a somewhat related issue, why is there a way to define an array of embedded documents and (as far as I can tell) no way of embedding a single document? [code block] And the output: $ node test.js TypeError: undefined is not a function at CALL_NON_FUNCTION_AS_CONSTRUCTOR (native) at Schema.path (/home/bobry/code/mb/support/mongoose/lib/mongoose/schema.js:119:24) at Schema.add (/home/bobry/code/mb/support/mongoose/lib/mongoose/schema.js:88:12) at new Schema (/home/bobry/code/mb/support/mongoose/lib/mongoose/schema.js:26:10) at Object.<anonymous> (/home/bobry/code/mb/test.js:9:15) at Module._compile (node.js:462:23) at Module._loadScriptSync (node.js:469:10) at Module.loadSync (node.js:338:12) at Object.runMain (node.js:522:24) at Array.<anonymous> (node.js:756:12)
SSL error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
Got this error on both machines almost at the same time with docker-compose, and lately with fig after rolling back. A few search results point to a python/openssl issue, but I simply can't figure out where to dig. Python/openssl comes from homebrew. Boot2Docker-cli version: v1.4.1 Git commit: 43241cb Client version: 1.4.1 Client API version: 1.16 Go version (client): go1.4 Git commit (client): 5bc2ff8 OS/Arch (client): darwin/amd64 Server version: 1.4.1 Server API version: 1.16 Go version (server): go1.3.3 Git commit (server): 5bc2ff8
host sub domain
This simple proxy server <pre> var httpProxy = require('http-proxy'); httpProxy.createServer(function (req, res, proxy) { var options = { host: "localhost", port: 5984 }; proxy.proxyRequest(req, res, options); }).listen(5000); </pre> If I change host: "localhost" to host: "subdomain.domainoninternet.com" I get 'Host not found' back. Is this the "www . host" issue? https://github.com/nodejitsu/node-http-proxy/issues/150 On a related question, is there a way to set request headers before they are sent to the destination? (to set Auth headers from the server) Thank you
Why is express session very slow?
Environment vm - centos7 (4cpu, 8Gb memory, gigabit NAT) I tested with Express versions 3 and 4, set up as follows. package.json [code block] A. no session api.js [code block] B. session api.js [code block] Content download speed is extremely slow, despite being a local environment. How can I speed it up?
Performance drop from 4.13.10 to 5.0.1
Do you want to request a feature or report a bug? Bug What is the current behavior? After upgrading from `4.13.10` to `5.0.1` in our production environment, we noticed a real performance drop in all requests using mongoose models: ~200ms per response. We didn't release any code change apart from the `mongoose` dependency upgrade. And we don't use any particular feature of mongoose apart from models & schemas. After applying a fix to revert to `4.13.10`, everything went back to normal. Deploying with `5.0.1`: Reverting back to `4.13.10`: If the current behavior is a bug, please provide the steps to reproduce. More of a performance issue than a bug, so I'm not sure I will be able to reproduce it outside of a production environment. If someone knows any way to benchmark this, I will be happy to try 🙂 What is the expected behavior? Having response times around ~20ms for 150 RPS. Please mention your node.js, mongoose and MongoDB version. node.js 8.9.4, mongoose 5.0.1, MongoDB 3.4.10. If it's of any use, we are on a cloud managed (Atlas) sharded cluster.
couldn't find DSO to load: libhermes.so caused by
Description [code block] Version 0.71.0 Output of `npx react-native info` System: OS: macOS 13.1 CPU: (10) arm64 Apple M1 Pro Memory: 70.36 MB / 16.00 GB Shell: 5.8.1 - /bin/zsh Binaries: Node: 16.16.0 - ~/.nvm/versions/node/v16.16.0/bin/node Yarn: 1.22.19 - /opt/homebrew/bin/yarn npm: 8.11.0 - ~/.nvm/versions/node/v16.16.0/bin/npm Watchman: Not Found Managers: CocoaPods: 1.11.3 - /opt/homebrew/lib/ruby/gems/2.7.0/bin/pod SDKs: iOS SDK: Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1 Android SDK: Not Found IDEs: Android Studio: 2021.3 AI-213.7172.25.2113.9123335 Xcode: 14.2/14C18 - /usr/bin/xcodebuild Languages: Java: 17.0.4 - /usr/bin/javac npmPackages: @react-native-community/cli: Not Found react: 18.2.0 => 18.2.0 react-native: 0.71.0 => 0.71.0 react-native-macos: Not Found npmGlobalPackages: react-native: Not Found Steps to reproduce The issue occurs when building for Android; there are no issues on iOS. Snack, code example, screenshot, or link to a repository The issue occurs when building for Android; there are no issues on iOS.
feat: Streaming Mutations / Queries
Describe the feature you'd like to request Many upstream APIs (e.g. OpenAI's GPT) stream responses to the client. It would be great if TRPC supported such streaming requests. Describe the solution you'd like to see I'd imagine this would look like a mutation or query where the resolver is an async generator. Example: [code block] Describe alternate solutions An alternative is to set up subscriptions and map the mutation inputs to some identifier in the subscription. The problem with this approach is (a) it depends on websockets, which are difficult to support from an infrastructure perspective, (b) the race conditions between establishing a socket connection and corresponding this with mutations/queries can be miserable. Additional information I'd mostly like to understand any suggestions for how to approach this and whether you'd support a PR. Creating a `streamingMutation` procedure type, for instance, is fairly invasive but may be the correct approach. 👨‍👧‍👦 Contributing - [X] 🙋‍♂️ Yes, I'd be down to file a PR implementing this feature! [code block]
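The async-generator resolver shape the request describes can be sketched independently of tRPC (all names below are illustrative): each `yield` would correspond to one chunk flushed over the wire to the client.

```javascript
// Sketch of a resolver as an async generator: in a real transport each
// yielded value would be serialized and flushed to the client immediately.
async function* streamedCompletion(prompt) {
  const chunks = ["Hello", ", ", prompt, "!"];
  for (const chunk of chunks) {
    yield chunk; // one streamed frame per yield
  }
}

// Consumer side: drain the stream with for-await, as a client adapter would.
async function collect(gen) {
  const out = [];
  for await (const chunk of gen) out.push(chunk);
  return out;
}
```

The appeal over subscriptions is that this runs over a single HTTP response body, avoiding both websocket infrastructure and the connection-setup race conditions mentioned above.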
crash in accessing res.connection.pair
Hi, I am using the latest http-proxy in hostNameOnly mode and it kept crashing in https://github.com/nodejitsu/node-http-proxy/blob/master/lib/node-http-proxy.js#L372 I don't have the stack trace right now, but it said something like res.connection was undefined. I just changed res to req in that line and everything started to work. I am very new to nodejs, so not sure what may be happening, but is this just a typo? I would think you would use the request object to set the header on the proxied request, so it does seem like one. Regards Qasim
There needs to be a wildcard feature in event handling
Note: The 'anything' event in the docs can be confused as a global event, but of course isn't. There should exist something to the effect of: [code block] See: http://stackoverflow.com/questions/10405070/socket-io-client-respond-to-all-events-with-one-handler http://stackoverflow.com/questions/32552155/how-do-i-handle-all-incoming-socket-events-rather-than-a-specific-one http://stackoverflow.com/questions/32552481/did-the-socket-io-global-event-end-up-in-release?noredirect=1#comment52961409_32552481 Because there's no support for this, developers are having to modify the Socket.io client side (note, the node.js patch for this is not applicable client side) with code like this: [code block]
[ServerErrors][TypeScript] 5.1.0-dev.20230305
The following errors were reported by 5.1.0-dev.20230305 Pipeline that generated this bug Logs for the pipeline run File that generated the pipeline This run considered 200 popular TS repos from GH (after skipping the top 0). <details> <summary>Successfully analyzed 188 of 200 visited repos</summary> | Outcome | Count | |---------|-------| | Detected interesting changes | 122 | | Detected no interesting changes | 66 | | Git clone failed | 1 | | Unknown failure | 11 | </details> Investigation Status | Repo | Errors | Outcome | |------|--------|---------| |akveo/ngx-admin|1| | |angular/angular-cli|1| | |angular/angular|1| | |ant-design/ant-design|1| | |apache/echarts|1| | |apache/superset|1| | |apollographql/apollo-client|1| | |baidu/amis|1| | |blitz-js/blitz|1| | |BrainJS/brain.js|1| | |BuilderIO/qwik|1| | |chakra-ui/chakra-ui|1| | |chartist-js/chartist|1| | |cheeriojs/cheerio|1| | |codex-team/editor.js|1| | |colinhacks/zod|1| | |compiler-explorer/compiler-explorer|1| | |conwnet/github1s|1| | |CopyTranslator/CopyTranslator|1| | |date-fns/date-fns|1| | |desktop/desktop|1| | |elastic/kibana|1| | |electron-userland/electron-builder|1| | |Eugeny/tabby|1| | |excalidraw/excalidraw|1| | |facebook/docusaurus|1| | |felixrieseberg/windows95|1| | |fingerprintjs/fingerprintjs|1| | |floating-ui/floating-ui|1| | |foambubble/foam|1| | |formatjs/formatjs|1| | |framer/motion|1| | |GeekyAnts/NativeBase|1| | |gothinkster/realworld|1| | |grafana/grafana|1| | |GrapesJS/grapesjs|1| | |graphql/
Import resources into Terraform
Use case: manage infrastructure environments, both existing and created from scratch, with the same Terraform configs. For example, in development environments we want to create everything from scratch and destroy everything when we finish working with it. In production we want to be able to add new resources with Terraform without conflicting with resources that already exist but are not yet managed by Terraform. For example: [code block] If Terraform has not yet created this resource, it would check whether subnet_b_id is defined and whether a resource with that id exists in the provider API; if so, it would import the resource into Terraform based on the data from the API instead of creating it.
io.of(nsp).to(room).emit('test',{testdata: 1}); not being received
Hey, I'm running 1.4.5, and when I try to emit an event to a room, triggered from a REST API call, the connected sockets in the room aren't receiving the event. Debug mode shows it is being sent as undefined/driver/v1 +4ms. My current workaround is to find each socket in the room and emit the event to each socket individually.
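The per-socket workaround described above can be sketched as follows. This assumes a Socket.IO 1.x-style namespace where `nsp.adapter.rooms[room]` maps socket ids to membership and `nsp.connected[id]` holds the socket objects; those property names are assumptions modeled on the 1.x internals, and the mock namespace stands in for a real server so the sketch runs standalone.

```javascript
// Emit to every socket currently in a room, one socket at a time,
// instead of relying on io.of(nsp).to(room).emit(...).
function emitToRoomSockets(nsp, room, event, payload) {
  const ids = Object.keys(nsp.adapter.rooms[room] || {});
  ids.forEach((id) => nsp.connected[id].emit(event, payload));
  return ids.length; // how many sockets were reached
}

// Minimal mock namespace with two sockets joined to "room1".
const received = [];
const makeSocket = (id) => ({ emit: (ev, data) => received.push([id, ev, data]) });
const nsp = {
  adapter: { rooms: { room1: { a: true, b: true } } },
  connected: { a: makeSocket("a"), b: makeSocket("b") },
};

const count = emitToRoomSockets(nsp, "room1", "test", { testdata: 1 });
console.log(count); // → 2
```

Returning the number of sockets reached makes the workaround easy to verify from the REST handler that triggers the emit.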
src\bitvec.c(43): warning C4141: 'dllexport': used more than once
I'm installing it on Windows 11 with PostgreSQL 16. The git clone worked, but when I ran the nmake command (nmake /F Makefile.win) I got a few errors:
<pre>
src\bitvec.c(43): warning C4141: 'dllexport': used more than once
C:\Program Files\PostgreSQL\16\include\server\access/tupmacs.h(65): error C2196: case value '4' already used
C:\Program Files\PostgreSQL\16\include\server\access/tupmacs.h(197): error C2196: case value '4' already used
src\hnsw.c(190): warning C4141: 'dllexport': used more than once
NMAKE : fatal error U1077: 'cl /nologo /I"C:\Program Files\PostgreSQL\16\include\server\port\win32_msvc" /I"C:\Program Files\PostgreSQL\16\include\server\port\win32" /I"C:\Program Files\PostgreSQL\16\include\server" /I"C:\Program Files\PostgreSQL\16\include" /O2 /fp:fast /c src\hnsw.c /Fosrc\hnsw.obj' : return code '0x2'
Stop.
</pre>
Can anyone please provide assistance? Thank you. Aaron
Add ability to limit bandwidth for S3 uploads/downloads
Original from #1078, this is a feature request to add the ability for the `aws s3` commands to limit the amount of bandwidth used for uploads and downloads. In the referenced issue, it was specifically mentioned that some ISPs charge fees if you exceed a certain Mbps, so users need the ability to cap their bandwidth. I imagine this is something we'd only need to add to the `aws s3` commands.
High rate of "client not handshaken should reconnect"
I am running a chat server with Node.js / Socket.IO and am seeing a lot of "client not handshaken" warnings. At peak times there are around 1,000 to 3,000 open TCP connections. For debugging purposes I plotted a graph of the actions that follow the server-side "set close timeout" event, because the warnings are always preceded by it. The format is: <pre> Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 2098080741242069807 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - xhr-polling closed due to exceeded duration -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 330973265416677743 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - setting request GET /socket.io/1/xhr-polling -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 10595896332140683620 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - cleared close timeout for client 10595896332140683620 -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 21320636051749821863 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - cleared close timeout for client 21320636051749821863 -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 3331715441803393577 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) warn - client not handshaken client should reconnect </pre> The plot is explained as follows: - x axis: the time elapsed between the first and last sighting of a client id. - y axis: total
Updating the path '__v' would create a conflict at '__v'
This happens when I call findOneAndUpdate or findByIdAndUpdate. If I remove __v from the document before updating, the error does not appear. The stack trace points at ../node_modules/mongoose/lib/query.js:3119:9. MongoDB shell version v3.6.1, Mongoose v4.13.9, Node v8.9.4.
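The removal step mentioned above can be sketched as a small helper: strip the version key from the update payload before passing it to findOneAndUpdate, so an explicit `__v` path cannot conflict with Mongoose's own versioning update. `stripVersionKey` is an illustrative name, not a Mongoose API, and the sketch runs standalone on a plain object.

```javascript
// Drop the '__v' version key from an update payload so Mongoose's
// internal versioning does not collide with an explicit '__v' path.
function stripVersionKey(update) {
  const { __v, ...rest } = update;
  return rest;
}

const update = { name: "new name", __v: 3 };
const safe = stripVersionKey(update);
console.log(safe); // → { name: 'new name' }
// Model.findOneAndUpdate(filter, safe) would then update without the conflict.
```

This is useful when the update payload comes straight from a previously fetched document (which carries `__v` along with the real fields).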
Error when uploading a YouTube video using Electron: Invalid multipart request with 0 mime parts.
This is related to https://github.com/googleapis/google-api-nodejs-client/issues/1083 It seems that with the latest versions of googleapis, the package switched from axios to gaxios; this has caused the error to resurface when using the example code in an Electron app. How should this be handled? Environment details - OS: osx - Node.js version: v12.12.0 - npm version: 6.13.0 - `googleapis` version: "googleapis": "^45.0.0" Steps to reproduce 1. Use code from samples/youtube/upload.js inside an Electron app 2. Receive the `Invalid multipart request with 0 mime parts.` error upon request
How to stop streaming
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug I use stream.abort() to stop receiving from the API, but I get the exception below [code block] I have been following the guide in the documentation: > If you need to cancel a stream, you can break from a for await loop or call `stream.abort()`. To Reproduce [code block] Nope Code snippets _No response_ OS Ubuntu Node version 16.15.1 Library version v4.28.0
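One common way to handle this pattern is to treat the abort rejection as a normal stop signal rather than letting it propagate. The sketch below assumes an openai-v4-style stream where calling `abort()` makes the pending `for await` iteration reject; `isAbortError` is an illustrative predicate (the real library exposes its own error class), and the mock async generator stands in for the SDK stream so the sketch runs standalone.

```javascript
// Consume a stream, swallowing only the rejection raised by an abort.
async function consumeUntilAborted(stream, isAbortError) {
  const chunks = [];
  try {
    for await (const chunk of stream) chunks.push(chunk);
  } catch (err) {
    if (!isAbortError(err)) throw err; // real errors still propagate
  }
  return chunks;
}

// Mock stream: yields twice, then simulates the abort rejection.
async function* mockStream() {
  yield "a";
  yield "b";
  const err = new Error("Request was aborted.");
  err.name = "APIUserAbortError";
  throw err;
}

consumeUntilAborted(mockStream(), (e) => e.name === "APIUserAbortError")
  .then((chunks) => console.log(chunks)); // → [ 'a', 'b' ]
```

Checking the error type before swallowing it matters: a bare `catch {}` around the loop would also hide genuine network or API failures.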
Error: connect ETIMEDOUT (using bluebird)
I replaced the native Promise with bluebird, and this error happened: [code block] Environment: [code block] My test code: [code block]
[6.x] WebAssembly.instantiate(): Import #0 "./query_compiler_bg.js": module is not an object or function]
Bug description Nitro + Prisma 6.6.0 + D1 does not work Severity π¨ Critical: Data loss, app crash, security issue Reproduction https://github.com/medz/nitro-prisma-6.6 Expected vs. Actual Behavior Expected behavior: WASM modules can be parsed normally Frequency Consistently reproducible Does this occur in development or production? Only in development (e.g., CLI tools, migrations, Prisma Studio) Is this a regression? Prisma: 6.6.0 Nitro: 2.11.9 wrangler: 4.10.0 Workaround - Prisma Schema & Queries [code block] [code block] Logs & Debug Info [code block] Environment & Setup - OS: - Database: - Node.js version: Prisma Version [code block]
Batch API does not support cache_control
The Batch API failed with the error message > messages.5.content.0.text.cache_control: Extra inputs are not permitted I have indicated `betas` both in `params` and in `client.beta.messages.batches.create`. My prompts look like [code block]
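A possible workaround, while batches reject this field, is to strip `cache_control` from the content blocks before building the batch requests. The message shape below follows the error path in the report (`messages[i].content[j].cache_control`); the helper name is illustrative, not an Anthropic SDK API, and the sketch runs standalone on plain objects.

```javascript
// Remove `cache_control` from every content block of every message,
// leaving string-valued content untouched.
function stripCacheControl(messages) {
  return messages.map((msg) => ({
    ...msg,
    content: Array.isArray(msg.content)
      ? msg.content.map(({ cache_control, ...block }) => block)
      : msg.content,
  }));
}

const messages = [
  { role: "user", content: [{ type: "text", text: "hi", cache_control: { type: "ephemeral" } }] },
];
const cleaned = stripCacheControl(messages);
console.log(cleaned[0].content[0]); // → { type: 'text', text: 'hi' }
```

Running the same prompts through this helper only for the batch path lets the non-batch path keep its cache annotations.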
Error: socket hang up
I'm getting errors that look like this every hour or two, causing systemd to restart my nodeProxyServer.js script: [code block] Might this be related to node-http-proxy, or to something else?
Parse.com outdated module : Unable to complete HTTP request
Hi, Parse.com uses an old version of the Twilio module. As of a few hours ago, this outdated version no longer works. Would it be possible to restore backward compatibility, since it worked until today? It's not really feasible for us to migrate to Parse Server (and thus to an updated Twilio module) right now. Thanks! Edit: As @hramos pointed out, Parse.com is not using the official Twilio npm module. I'll leave this message up in the hope that someone at Twilio will try to update the server to restore compatibility...
Error with nodejs 0.10.8
I get this warning with every simple Socket.IO app I make: warn - websocket parser error: reserved fields must be empty Any clue how I can fix this?