All Issues
24,993 verified issues
feat: Streaming Mutations / Queries
Describe the feature you'd like to request Many upstream APIs (e.g. OpenAI's GPT) stream responses to the client. It would be great if tRPC supported such streaming requests. Describe the solution you'd like to see I'd imagine this would look like a mutation or query whose resolver is an async generator. Example: [code block] Describe alternate solutions An alternative is to set up subscriptions and map the mutation inputs to some identifier in the subscription. The problem with this approach is that (a) it depends on WebSockets, which are difficult to support from an infrastructure perspective, and (b) the race conditions between establishing a socket connection and correlating it with mutations/queries can be miserable. Additional information I'd mostly like to understand any suggestions for how to approach this and whether you'd support a PR. Creating a `streamingMutation` procedure type, for instance, is fairly invasive but may be the correct approach. 👨‍👧‍👦 Contributing - [X] 🙋‍♂️ Yes, I'd be down to file a PR implementing this feature! [code block]
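A minimal sketch of the shape the request describes (this is not the real tRPC API; all names are hypothetical): a resolver written as an async generator, with the client consuming the stream via `for await`.

```javascript
// Hypothetical sketch of an async-generator resolver. In a real
// integration the chunks would come from an upstream streaming API.
async function* completionResolver(input) {
  const chunks = ['Hello', ', ', input.name, '!'];
  for (const chunk of chunks) {
    yield chunk; // each yield would be flushed to the client as it arrives
  }
}

// The client side would consume the stream incrementally:
async function consume() {
  let out = '';
  for await (const chunk of completionResolver({ name: 'world' })) {
    out += chunk;
  }
  return out;
}
```

The appeal over the subscription workaround is that the request/response pairing is implicit: there is no separate socket to establish and no identifier to correlate.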
crash in accessing res.connection.pair
Hi, I am using the latest http-proxy in hostNameOnly mode and it kept crashing at https://github.com/nodejitsu/node-http-proxy/blob/master/lib/node-http-proxy.js#L372 I don't have the stack trace right now, but it said something like res.connection was undefined. I changed res to req on that line and everything started to work. I am very new to Node.js, so I'm not sure what may be happening, but is this just a typo? I would think you would use the request object to set the header on the proxied request, so it does seem like one. Regards Qasim
There needs to be a wildcard feature in event handling
Note: The 'anything' event in the docs can be confused for a global event, but of course isn't. There should exist something to the effect of: [code block] See: http://stackoverflow.com/questions/10405070/socket-io-client-respond-to-all-events-with-one-handler http://stackoverflow.com/questions/32552155/how-do-i-handle-all-incoming-socket-events-rather-than-a-specific-one http://stackoverflow.com/questions/32552481/did-the-socket-io-global-event-end-up-in-release?noredirect=1#comment52961409_32552481 Because there's no support for this, developers have to modify the Socket.IO client side (note: the Node.js patch for this is not applicable client side) with code like this: [code block]
[ServerErrors][TypeScript] 5.1.0-dev.20230305
The following errors were reported by 5.1.0-dev.20230305 Pipeline that generated this bug Logs for the pipeline run File that generated the pipeline This run considered 200 popular TS repos from GH (after skipping the top 0).

<details>
<summary>Successfully analyzed 188 of 200 visited repos</summary>

| Outcome | Count |
|---------|-------|
| Detected interesting changes | 122 |
| Detected no interesting changes | 66 |
| Git clone failed | 1 |
| Unknown failure | 11 |

</details>

Investigation Status

| Repo | Errors | Outcome |
|------|--------|---------|
|akveo/ngx-admin|1| |
|angular/angular-cli|1| |
|angular/angular|1| |
|ant-design/ant-design|1| |
|apache/echarts|1| |
|apache/superset|1| |
|apollographql/apollo-client|1| |
|baidu/amis|1| |
|blitz-js/blitz|1| |
|BrainJS/brain.js|1| |
|BuilderIO/qwik|1| |
|chakra-ui/chakra-ui|1| |
|chartist-js/chartist|1| |
|cheeriojs/cheerio|1| |
|codex-team/editor.js|1| |
|colinhacks/zod|1| |
|compiler-explorer/compiler-explorer|1| |
|conwnet/github1s|1| |
|CopyTranslator/CopyTranslator|1| |
|date-fns/date-fns|1| |
|desktop/desktop|1| |
|elastic/kibana|1| |
|electron-userland/electron-builder|1| |
|Eugeny/tabby|1| |
|excalidraw/excalidraw|1| |
|facebook/docusaurus|1| |
|felixrieseberg/windows95|1| |
|fingerprintjs/fingerprintjs|1| |
|floating-ui/floating-ui|1| |
|foambubble/foam|1| |
|formatjs/formatjs|1| |
|framer/motion|1| |
|GeekyAnts/NativeBase|1| |
|gothinkster/realworld|1| |
|grafana/grafana|1| |
|GrapesJS/grapesjs|1| |
|graphql/
Import resources into Terraform
Use case: manage both existing infrastructure environments and ones created from scratch, with the same Terraform configs. For example, in development environments we want to create everything from scratch and destroy everything when we finish working with it. In production we want to be able to add new resources with Terraform without conflicting with resources that already exist but are not yet managed by Terraform. For example: [code block] If Terraform has not yet created this resource, it would check whether subnet_b_id is defined; if a resource with that id exists in the provider API, it would import that resource into Terraform based on the data in the API instead of creating it.
io.of(nsp).to(room).emit('test',{testdata: 1}); not being received
Hey, running 1.4.5. When I try to emit an event to a room, triggered from a REST API call, the connected sockets in the room aren't receiving the event. Debug mode shows it sending as `undefined/driver/v1 +4ms`. My current workaround is to find each socket in the room and emit the event to each socket individually.
src\bitvec.c(43): warning C4141: 'dllexport': used more than once
I'm installing it on Windows 11 with PostgreSQL 16. The git clone worked, but when I ran the nmake command (nmake /F Makefile.win) I got a few errors: src\bitvec.c(43): warning C4141: 'dllexport': used more than once C:\Program Files\PostgreSQL\16\include\server\access/tupmacs.h(65): error C2196: case value '4' already used C:\Program Files\PostgreSQL\16\include\server\access/tupmacs.h(197): error C2196: case value '4' already used src\hnsw.c(190): warning C4141: 'dllexport': used more than once NMAKE : fatal error U1077: 'cl /nologo /I"C:\Program Files\PostgreSQL\16\include\server\port\win32_msvc" /I"C:\Program Files\PostgreSQL\16\include\server\port\win32" /I"C:\Program Files\PostgreSQL\16\include\server" /I"C:\Program Files\PostgreSQL\16\include" /O2 /fp:fast /c src\hnsw.c /Fosrc\hnsw.obj' : return code '0x2' Stop. Can anyone please provide assistance? Thank you. Aaron
Add ability to limit bandwidth for S3 uploads/downloads
Originally from #1078, this is a feature request to add the ability for the `aws s3` commands to limit the amount of bandwidth used for uploads and downloads. In the referenced issue, it was specifically mentioned that some ISPs charge fees if you go above a certain Mbps, so users need the ability to cap bandwidth. I imagine this is something we'd only need to add to the `aws s3` commands.
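For illustration only, a minimal sketch of how a client-side cap could pace a transfer. This is not AWS CLI code; `schedule` is a hypothetical helper that splits a payload into chunks and computes the inter-chunk delay needed to keep the average rate at or below a target bytes-per-second.

```javascript
// Compute a pacing plan: send `chunkSize` bytes, wait `delayMs`, repeat.
// Sending chunkSize bytes every (chunkSize / bytesPerSec) seconds yields
// an average throughput of exactly bytesPerSec.
function schedule(totalBytes, chunkSize, bytesPerSec) {
  const delayMs = (chunkSize / bytesPerSec) * 1000;
  const plan = [];
  for (let sent = 0; sent < totalBytes; sent += chunkSize) {
    plan.push({
      offset: sent,
      bytes: Math.min(chunkSize, totalBytes - sent),
      delayMs,
    });
  }
  return plan;
}
```

A real implementation would apply such pacing inside the transfer loop (a token bucket over the upload socket); the sketch only shows the arithmetic behind a rate cap.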
High rate of "client not handshaken should reconnect"
I am running a chat server with node.js / socket.io and see a lot of "client not handshaken" warnings. At peak time there are around 1,000 to 3,000 open TCP connections. For debugging purposes I plotted the graph of actions following the server-side "set close timeout" event, because the warnings are always preceded by it, so the format is: <pre> Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 2098080741242069807 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - xhr-polling closed due to exceeded duration -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 330973265416677743 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - setting request GET /socket.io/1/xhr-polling -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 10595896332140683620 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - cleared close timeout for client 10595896332140683620 -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 21320636051749821863 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - cleared close timeout for client 21320636051749821863 -- Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) debug - set close timeout for client 3331715441803393577 Mon Aug 01 2011 08:16:01 GMT+0200 (CEST) warn - client not handshaken client should reconnect </pre> The following plot explained: - x axis: The time passed between the first and last sighting of a client id. - y axis: total
Updating the path '__v' would create a conflict at '__v'
It happens if I call findOneAndUpdate or findByIdAndUpdate. If I remove __v from the update document, the error does not appear. ../node_modules/mongoose/lib/query.js:3119:9 MongoDB shell version v3.6.1, Mongoose v4.13.9, Node v8.9.4
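The common workaround implied above is to strip the version key from the update payload before passing it to `findOneAndUpdate`, so an explicit `__v` doesn't collide with Mongoose's own versioning update. A minimal sketch (`stripVersionKey` is a hypothetical helper, not a Mongoose API):

```javascript
// Remove `__v` from an update document, both at the top level and
// inside `$set`, before handing it to findOneAndUpdate.
function stripVersionKey(update) {
  const { __v, ...rest } = update;
  if (update.$set) {
    const { __v: ignored, ...setRest } = update.$set;
    rest.$set = setRest;
  }
  return rest;
}
```

Usage would look like `Model.findOneAndUpdate(filter, stripVersionKey(doc))`, passing a cleaned copy of a document that was previously read from the database.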
Error when uploading Youtube video using Electron : Invalid multipart request with 0 mime parts.
This is related to https://github.com/googleapis/google-api-nodejs-client/issues/1083 It seems that with the latest versions of googleapis the package switched from axios to gaxios; this has caused the error to resurface when using the example code in an Electron app. How should this be handled? Environment details - OS: osx - Node.js version: v12.12.0 - npm version: 6.13.0 - `googleapis` version: "googleapis": "^45.0.0" Steps to reproduce 1. Use code from samples/youtube/upload.js inside an Electron app 2. Receive an `Invalid multipart request with 0 mime parts.` error upon request
How to stop streaming
Confirm this is a Node library issue and not an underlying OpenAI API issue - [X] This is an issue with the Node library Describe the bug I use stream.abort() to stop receiving from the API, but I get the exception below [code block] I have been following the guide in the documentation > If you need to cancel a stream, you can break from a for await loop or call `stream.abort()`. To Reproduce [code block] Code snippets _No response_ OS Ubuntu Node version 16.15.1 Library version v4.28.0
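A self-contained sketch of the abort pattern (no network and no OpenAI SDK; `fakeStream` is a stand-in for the SDK's stream): calling abort raises an error inside the `for await` loop, and the caller is expected to catch that error rather than let it escape as an unhandled exception.

```javascript
// Stand-in for an SDK stream: yields chunks until the signal is aborted,
// then throws, mimicking how an aborted request surfaces as an error.
async function* fakeStream(signal) {
  for (let i = 0; i < 100; i++) {
    if (signal.aborted) throw new Error('Request was aborted.');
    yield `chunk-${i}`;
  }
}

async function readSome(limit) {
  const controller = new AbortController();
  const received = [];
  try {
    for await (const chunk of fakeStream(controller.signal)) {
      received.push(chunk);
      if (received.length === limit) controller.abort(); // like stream.abort()
    }
  } catch (err) {
    // With the real SDK, the abort error would land here; catching it
    // is what makes calling abort() "clean" instead of crashing.
  }
  return received;
}
```

The key point for the reported exception: abort is delivered as a thrown error, so the loop that consumes the stream needs a try/catch (or the consumer should simply `break` out of the loop instead of aborting).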
Error: connect ETIMEDOUT ----use bluebird
I replaced the native Promise with bluebird, and this error happened: [code block] environment: [code block] my test code: [code block]
[6.x] WebAssembly.instantiate(): Import #0 "./query_compiler_bg.js": module is not an object or function]
Bug description Nitro + Prisma 6.6.0 + D1 does not work Severity ๐จ Critical: Data loss, app crash, security issue Reproduction https://github.com/medz/nitro-prisma-6.6 Expected vs. Actual Behavior Expected behavior: WASM modules can be parsed normally Frequency Consistently reproducible Does this occur in development or production? Only in development (e.g., CLI tools, migrations, Prisma Studio) Is this a regression? Prisma: 6.6.0 Nitro: 2.11.9 wrangler: 4.10.0 Workaround - Prisma Schema & Queries [code block] [code block] Logs & Debug Info [code block] Environment & Setup - OS: - Database: - Node.js version: Prisma Version [code block]
Batch API does not support cache_control
The Batch API failed with the error message > messages.5.content.0.text.cache_control: Extra inputs are not permitted I have indicated `betas` in both `params` and `client.beta.messages.batches.create`. My prompts look like [code block]
Error: socket hang up
I'm getting errors like this every hour or two, causing systemd to restart my nodeProxyServer.js script: [code block] Could this be related to node-http-proxy, or to something else?
Parse.com outdated module : Unable to complete HTTP request
Hi, Parse.com uses an old version of the Twilio module. Since a few hours ago, this outdated version doesn't work anymore. Is it possible to restore backward compatibility, so it works as it did until today? It's not really feasible for us to migrate to Parse-Server (and thus to an updated Twilio module) right now. Thanks! Edit: As @hramos pointed out, Parse.com is not using the official Twilio npm module. I leave this message hoping someone at Twilio will try to update the server to restore compatibility...
Error with nodejs 0.10.8
I get this warning with each simple socket.io app I make: warn - websocket parser error: reserved fields must be empty Any clue how I can fix this?
Procedure specific custom request headers
Discussed in https://github.com/trpc/trpc/discussions/2017 <div type='discussions-op-text'> <sup>Originally posted by skovhus June 17, 2022</sup> From the documentation and code examples, it seems like client-side HTTP headers can only be configured in the tRPC client, not at the call site when doing a query or mutation. But there are use cases where you want to pass specific headers for a single procedure. What is the recommended way here? Is that something we lack support for? Example use case: elevated privileges that would update the authorization header for a single procedure. </div>
ECR image push fails: image tag already exists in immutable repository
Pushing a Docker image to an AWS ECR repository with immutable tags fails because the tag (e.g. the version from package.json) was already pushed previously. ECR with immutable tags rejects any push that would overwrite an existing tag. The fix is to bump the version in package.json before every push to a production ECR repository. CI/CD pipelines that don't auto-bump versions will fail repeatedly on the same tag.
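A minimal sketch of the auto-bump step such a pipeline could run before building and pushing (the function and its use are illustrative; in practice `npm version patch` performs the same bump on package.json and commits it):

```javascript
// Increment the patch component of a semver string so the resulting
// ECR tag is unique on every push to an immutable repository.
function bumpPatch(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return `${major}.${minor}.${patch + 1}`;
}
```

The CI job would read the current version from package.json, bump it, commit the change, and only then tag and push the image, guaranteeing the tag has never been pushed before.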