Show HN: Instant API – Build type-safe web APIs with JavaScript (github.com/instant-dev)
102 points by keithwhor on Oct 26, 2023 | hide | past | favorite | 78 comments
Hey there HN! I just wrapped up an all-day documentation spree for Instant API, a JavaScript framework for easily building web APIs that implements type safety at the HTTP interface. It uses a function-as-a-service approach combined with a slightly modified JSDoc spec to automatically enforce parameter and schema validation before HTTP requests make it through to your code. You just write comments above your endpoint functions and voila, enforced type safety contracts between your API endpoints and your developers. OpenAPI specifications are generated automatically as a byproduct of simply building your endpoints.
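To make the idea concrete, here is a minimal sketch in plain JavaScript of what "enforce a JSDoc type constraint before the handler runs" means — a hypothetical illustration only, not Instant API's actual internals (`makeValidator` is an invented name):

```javascript
// Hypothetical sketch: parse a constraint like "number{-90,90}" out of a
// JSDoc-style annotation and enforce it against an incoming parameter.
// This is NOT Instant API's real implementation, just the general idea.
function makeValidator(spec) {
  const match = spec.match(/^number\{(-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)\}$/);
  if (!match) throw new Error(`Unsupported spec: ${spec}`);
  const [min, max] = [Number(match[1]), Number(match[2])];
  // Reject wrong types and out-of-range values before the handler is called.
  return (value) => typeof value === 'number' && value >= min && value <= max;
}

const isLatitude = makeValidator('number{-90,90}');
isLatitude(45.5); // true
isLatitude(120);  // false: out of range
isLatitude('45'); // false: wrong type, strings are not coerced
```

A real implementation would wire this into the request pipeline so a failing check returns a 400 before your endpoint function executes.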

This eliminates the need for most schema validation libraries, automates user input sanitization, and prevents your team from developing carpal tunnel syndrome trying to keep your OpenAPI / Swagger specifications up-to-date. We developed it as a side effect of building our own serverless API platform, where it has scaled to handle over 100M requests from users per day. This is an early release, but Instant API has had about six years of consistent development as proprietary software to get to the point it is at today. We have spent the last couple of months modernizing it.

We have command line tools that make building, testing, and deploying Instant API to Vercel or AWS a breeze. Additionally, there is a sister package, Instant ORM, that provides a Ruby on Rails-like model management, migration and querying suite for Postgres. Everything you need to build an API from scratch.

Finally -- LLMs and chatbots are at the top of the hype cycle right now. We aren't immune, and have developed a number of LLM-integrated tools ourselves. Instant API comes with first-class Server-Sent Event support for building your own assistants and a few other nifty tools that help with AI integrations and chatbot webhook responses, like executing endpoints as background jobs.

This has been a huge labor of love -- it has become our "JavaScript on Rails" suite -- and we hope y'all enjoy it!



Neat! Does this framework have any tools for content negotiation? I get that you could do this in the handlers themselves, but in my experience different representations are often so different in implementation that aside from maybe a tiny bit of shared code to produce some underlying dataset, the actual rendering of that data shares very little.

As an example: `application/json` and `text/html` representations of a single resource may render some of the same bits of data, but the actual rendering process is entirely different. Even if the content type is the same, you may render completely different output based on other parameters, like a header used to select between different profiles of said content type, or even just something simple as `accept-language`.

If it's not possible right now, are there any plans to support routing based not just on method and URL, but on headers as well?


I’m not quite sure I grok the use case, can you give a more precise example? Either way, if you’re interested in the approach I *very much welcome* issues, PRs and contributors. I think it would be pretty straightforward to create tests for whatever you’re looking for.


Here's a simple use case: human friendly and machine friendly representations at the same path.

For example, let's say I have an api at `/weather` which when requested with the header `accept: application/json` returns something like:

    { "temperature": 25, "unit": "c" }
Now if I open that same path in a browser, I get an html page displaying the day's temperature in a nicely formatted way for humans to read.

Or another case, I want to version an API, such that I can provide different representations based on the version requested. So I look for `accept-version` and if it's `v1` for example I get the `v1` version. GitHub does this, as a real world example: https://docs.github.com/en/rest/overview/api-versions?apiVer...
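The weather example above can be sketched as a single handler dispatching on the `Accept` header — a hypothetical illustration in plain JavaScript (`renderWeather` is an invented name, not part of any framework):

```javascript
// Hypothetical content negotiation sketch: one dataset, two representations,
// selected by the request's Accept header.
function renderWeather(acceptHeader, data) {
  if (acceptHeader.includes('text/html')) {
    // Human-friendly rendering for browsers.
    return `<p>Today: ${data.temperature}°${data.unit.toUpperCase()}</p>`;
  }
  // Machine-friendly default.
  return JSON.stringify(data);
}

renderWeather('application/json', { temperature: 25, unit: 'c' });
// → '{"temperature":25,"unit":"c"}'
renderWeather('text/html', { temperature: 25, unit: 'c' });
// → '<p>Today: 25°C</p>'
```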


Entirely possible! I mean right now we restrict endpoint definitions to HTTP methods but in theory you could define unique endpoints for any combination of parameters. What I would want to find is the most effective / used combinations as opposed to unlimited flexibility.


I spotted a bug in the README's first example:

  @param {number{-90,90}} coords.lng Longitude
The range for longitude is incorrect, and only covers half the globe.

How and when are these kinds of types checked? Would this error only be found at runtime?


Thanks for the tip! Fixed this; just missed this when writing the example.

This error would only be found if you wrote a test for it. The framework doesn't have an AI integration that checks whether or not your types are consistent with real world applications... but could.


I know the answer is a bit tongue in cheek, but that would be very valuable and cool, just in case someone from Cursor or Refact is reading this thread ^^


Neat. I like the fresh take on validation. It’s not necessarily unique, but it may be cleaner / more efficient than previous attempts. The mixing of backend processing and the http layer reminds me of my CouchDB days. @stream seems potentially tricky to implement at scale but very useful. And a few nits here and there (eg maybe return a 202 instead of a 200 on accept), but overall pretty slick. Any numbers on performance and/or scalability in production?


@stream is actually straightforward in Node; it’s just part of the http request. Typically Instant API waits until the full request is processed to return headers and body as the endpoint implementation could modify headers; but with streaming headers are sent immediately and then the rest of the response streamed with .stream() injecting text.
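For the curious, the wire format being streamed is simple. Here is a hypothetical helper — not Instant API's actual `.stream()` internals — that frames one chunk as a Server-Sent Event:

```javascript
// Hypothetical SSE framing sketch: each chunk becomes an "event:"/"data:"
// pair terminated by a blank line, per the Server-Sent Events format.
function sseFrame(eventName, payload) {
  return `event: ${eventName}\ndata: ${JSON.stringify(payload)}\n\n`;
}

// A streaming handler would send headers first, then write frames as
// they become available, e.g.:
//   res.write(sseFrame('chunk', { content: 'Hello' }));
```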

In serverless environments — where we have used the framework as a gateway with endpoints being executed inside of Lambdas — streaming is accomplished using Redis broadcasting and channels are managed via UUIDs. We may open source this at some point but an enterprising engineer could fork and implement this.

Re: scaling. As mentioned in the README we’ve scaled horizontally to 100M requests / day or about 1,157 RPS without issue on older, proprietary versions. Candidly this early release is designed for usability over performance but I wouldn’t be surprised in either direction. :)


Please don’t call this mess “type-safe”


I’d never suggest this is “mess type” safe. In fact I strongly discourage mess types.

Edit: Parent added quotation marks and hyphens after my comment. I stand proudly by my cutesy remark.


How about calling it doc-driven API generation?


I think it’s very cool. We went a similar yet different route, into the realm of code generation rather than building our own API framework. Basically we have something which can generate types, SQL scripts, data and type conversion, security, and an Express API service/controller based on a Mermaid table input.

I think your way is better, but we’re very reliant on express because of our heavy Azure integration and how that’s just easier with express because Microsoft has a lot of tools for it.


When you say “we” who are you referring to? Would love to learn more about what you’ve built. I haven’t built easy deployment rails for Azure but if you’re interested in playing with our solution I’d be down for a call! email keith@instant.dev


I was looking at ChatGPT plugins today and they use the OpenAPI convention. Is it called OpenAPI because it was created by "Open"AI or it's something completely independent of OpenAI that they chose to adopt?


The naming is completely coincidental and probably a little confusing / unfortunate. OpenAPI is a rebranding of Swagger, a really popular open source API specification. Incidentally the OpenAPI initiative only predates OpenAI by a month so I don’t think Sam, Greg & co could have known better at the time.

But OpenAPI is the “gold standard” machine readable API specification format so it makes sense that OpenAI would rely on it!


> But OpenAPI is the “gold standard” machine readable API specification format

I remember when that would have been WSDL, with something like SoapUI - where you'd feed in the service description file from whatever service you want to interact with and would get a functional API client. It also had codegen that let you end up with full client stubs in Java or another language, or you could generate the WSDL file from your server code as well: https://www.soapui.org/docs/soap-and-wsdl/working-with-wsdls...

It's nice to see OpenAPI/Swagger finally catching up, because to me the codegen aspects felt insufficient there for the longest time - if your server code determines what responses it will return, why on earth would you ever want to write the OpenAPI specs manually? It's the same as with ORMs - if you already have a database schema set up on a server somewhere (with SQL migrations, say with dbmate), all of your local entity mappings should be easily generated in a schema-first approach; you shouldn't have to write a single line of code for that.

The latest service I'm building is in .NET and they actually have that covered really nicely: https://learn.microsoft.com/en-us/aspnet/core/tutorials/gett...

And the Rider IDE there also has nice schema-first tooling, even if a bit niche: https://blog.jetbrains.com/dotnet/2022/01/31/entity-framewor...

Model driven development and codegen feel like they definitely belong for boilerplate heavy use cases like this, where you can also almost perfectly describe the end result that you need based on what you have!


Many years ago I was connecting APIs from ~100 companies into our application, they all essentially did the same thing -- update information in our system based on a tracking number. At this time, about half were web APIs and half were WSDLs. The WSDL APIs were by and large the easiest to work with, I didn't have to write much of any code. On the other hand, the web APIs required me to have a custom flow for each and every one and they could break at any point without me knowing.

I've seen a lot of hate on WSDLs throughout the years but they were honestly really productive for getting work done.


OpenAPI predates OpenAI by quite some time. OpenAI has adopted usage of OpenAPI.

I do find myself accidentally saying one and meaning the other, in both directions too!


Actually only a month! I just looked it up and was surprised myself.


According to Wikipedia[1] it looks like the rebranding effort from Swagger -> OpenAPI was in November 2015, but the OpenAPI Specification was only officially renamed on 1st January 2016. OpenAI was founded on 10th December 2015. So it depends how pedantic we are being -- technically on branding, OpenAI predates OpenAPI by 21 days. In terms of the spec itself (as in, the thing they adopted), Swagger v1 was released August 2011.

[1] https://en.wikipedia.org/wiki/OpenAPI_Specification


Worry not, this is Hacker News. Pedantry is more valuable as currency than H100s here.


Happy to see more projects embrace OpenAPI.


<3


would this be "pydantic-for-javascript" ?



Good call! Do you think Python developers would feel more comfortable venturing into JavaScript with this framing?


Nice work, Keith.


Seems like you put a lot of work into this and as such I'm hesitant to criticize. But what in the world made you think this is superior to using an actual type-safe language like TypeScript?

  /**
   * Streams results for our lovable assistant
   * @param {string} query The question for our assistant
   * @stream {object}   chunk
   * @stream {string}   chunk.id
   * @stream {string}   chunk.object
   * @stream {integer}  chunk.created
   * @stream {string}   chunk.model
   * @stream {object[]} chunk.choices
   * @stream {integer}  chunk.choices[].index
   * @stream {object}   chunk.choices[].delta
   * @stream {?string}  chunk.choices[].delta.role
   * @stream {?string}  chunk.choices[].delta.content
   * @returns {object} message
   * @returns {string} message.content
   */
Is a developer supposed to type that out for every single endpoint that uses the chunk type?

Also do I understand it correctly that you need to use Instant API everywhere in order to get this "type safety" and as such any consumer implementation would be basically limited to JavaScript?


I don't have an opinion about this particular project, but this is just JSDoc, as mentioned in the readme: "Simply write a JSDoc-compliant comment block for a function that represents your API endpoint".

Yes, it's verbose, but its advantage over TypeScript is that it works without any compilation step, while being standard enough to give helpful type hints in any good enough IDE.

Of course, if your project uses TypeScript already, you should use TypeScript instead, but if you just want to do a simple web page without any compilation step/package.json shenanigans, it's sometimes nice to reach for JSDoc.

EDIT: to answer your question, in JSDoc you can typedef your own custom objects[0], so you thankfully don't have to repeat your types everywhere.

[0] https://devhints.io/jsdoc


It is type safety at runtime. TypeScript gives you some type safety at development time (as JSDoc also can), but if I read it right, this helps you generate an OpenAPI spec to validate the API endpoints, to get easy type safety at runtime. See also e.g. https://openapistack.co/docs/openapi-backend/intro/ which also helps you be type safe at runtime. Nest can also generate OpenAPI and validate based on it. The differences are that OpenAPI Stack is design-first while Instant is code-first, and Nest uses decorators while Instant uses JSDoc. It is more like https://www.npmjs.com/package/swagger-jsdoc which generates the OpenAPI spec from JSDoc.

Whether or not to use TypeScript doesn't matter here; it is about runtime validation. Some like the native JS way with JSDoc, others like TypeScript and don't mind the build step for the API.


It is worth noting there are good options out there to get both compile time type safety and runtime validation using TypeScript.

Personally I’m a fan of ts-rest: https://ts-rest.com/

Write a specification of your endpoints in TypeScript, and from that one spec you get:

- Server side validation of requests/responses

- A TypeScript API client

- Specialized clients, if you like, for example a react-query client if you like using react-query

- Auto-generated docs (OAS)

- TypeScript types for requests and responses to use in your code

IMO a lot more maintainable than a jsdoc based approach. For example, you can define a type for a Person, and then re-use that type in the responses of all Person-related endpoints, and even in POST/PUT/PATCH bodies (saying things like “the POST body is a Person, but omit the id field”). With jsdoc you’re repeating that definition a tonne, AND you’re lacking compile time type safety.


Yes, I was wondering about that too. If you put in the effort to parse the docblock into runtime validation, why not pick up the existing utilities to convert TypeScript types into runtime code during the build? That would be much more powerful, with the added benefit of having type safety for all other code, and a familiar syntax for developers.


…or just use Zod?


And if you like Zod, you might as well use this: https://github.com/asteasolutions/zod-to-openapi

It converts Zod types to OpenAPI specification.


Or json schema, this https://www.npmjs.com/package/zod-to-json-schema

Json schema is more universal, and it can be used in .NET, Java, etc.


I don't like writing type definitions with zod's DSL.

I want to write definitions in the "first party syntax" - which both JSDoc and Typescript types are - and have everything else generated out from that.


Everybody seems to want that, except for the people who work on Typescript, who argue it'd be the wrong tool.

Typescript's types simply aren't able to provide input validation, both because they don't exist at runtime and for lack of power ("number must be 0..10").
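The "number must be 0..10" point can be made concrete in a few lines of plain JavaScript — a hypothetical illustration (`assertScore` is an invented name), showing a constraint that is trivially checkable at runtime but has no equivalent in TypeScript's static `number` type:

```javascript
// A runtime constraint that TypeScript's static types cannot express:
// "number must be between 0 and 10".
function assertScore(value) {
  if (typeof value !== 'number' || value < 0 || value > 10) {
    throw new TypeError(`Expected a number in 0..10, got ${JSON.stringify(value)}`);
  }
  return value;
}

assertScore(7);     // passes, returns 7
// assertScore(42)  // would throw a TypeError at runtime
```

This is exactly the niche Zod-style schemas fill: the check exists at runtime, and the static type is derived from the same schema.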

At least with Zod, you also get Typescript types, from the same schema.


I like the sentiment, but you’ve gotta admit that being able to skip the “generation” step has its benefits


The only way you can skip that generation is if the zod type is your primary schema, which is only possible if your API is in javascript.


Yeah, that’s what I was thinking of. There are a few alternatives though, so I referred to existing tooling in general.


No not necessarily! I just haven’t written type import support yet; though you’re welcome to help.

And no — type safety is applied at the HTTP interface. API consumers need no special library to get all the benefits.


> And no — type safety is applied at the HTTP interface.

So it's basically validation middleware. The HTTP protocol does not define any such thing.


No, and neither does Typescript. One of the biggest weaknesses in Typescript is that it can't validate data across the wire. Which is a major use case for the language.

There are various tools for it already, but the JS ecosystem has always been up for yet another framework.


I hear you, but I don’t think this is necessarily true. It does leave type checking to your parsing logic, but the compiler can give you strong guarantees that you’re being defensive about untrusted structured data.

I use a JsonObject to do type narrowing and generate appropriate error messages when I receive invalid data over the wire: https://www.npmjs.com/package/@retrohacker/json-types

In every codebase I’ve added this to, I’ve found invalid parsing logic.

I feel like this type, or something similar, should be bundled in the official TypeScript project. That untrusted data comes out as “any” is not a good developer experience in my opinion.

And “unknown” is basically broken for type narrowing.


> And “unknown” is basically broken for type narrowing.

How? Typescript 4.9 improved it for type narrowing significantly. https://devblogs.microsoft.com/typescript/announcing-typescr...

Last weekend I actually hacked up an experiment to codegen narrowing unknown to a specific type and it works pretty well https://github.com/joshhunt/codegen-json-validator-experimen...


I haven't worked much in typescript over the last 10 months, but last time I tried type narrowing on untrusted data typecast to unknown I ran into a handful of problems documented here:

https://github.com/microsoft/TypeScript/issues/25720

Working with untrusted types in a GraphQL project led me to create that JsonObject module.

I wrote up what I was thinking at the time here: http://www.blankenship.io/essays/2022-12-01/

Maybe this has improved.


Yes. The main one is `foo in obj` now correctly narrows to `unknown & { foo: unknown }`. This allows you to correctly narrow an unknown to a fully typed object, as my code sample shows :)


The compiler can't help you at all when reading and validating data at runtime.

You mean you could use a library that helps you with checking the shapes?


The compiler will tell you when you are accessing data you haven't validated yet.

Using the json-types module above, you assign your incoming json object to the JsonObject type. Then you type narrow by validating the payload contains the properties you are using.

So you can:

    const user = (await body.json()) as JsonObject;

    // This is properly type narrowed and will work
    if (typeof user.emailAddress === "string") {
      // do something with user.emailAddress
    }

    // The compiler will let you know you can't trust
    // user.username to be a string at this point. This
    // would have generated an exception at runtime
    // for a malformed payload if the compiler didn't
    // catch it.
    user.username.split(" ")

    // You can also do type narrowing with early returns
    if (typeof user.age !== "number") {
       return new Error("user.age is expected to be a number");
    }

    // You can now use user.age with type safety


> No, and neither does Typescript. One of the biggest weaknesses in Typescript is that it can't validate data across the wire. Which is a major use case for the language.

Not sure what particular use case you have in mind but which language has built-in functionality to validate chunks of bytes without some kind of type definition? To me this is an orthogonal problem to language design. Every web framework has to serialize and deserialize data.

What I was getting at is that TypeScript is perfectly suited to define type safe data structures without reinventing the wheel. If you're going to parse type definitions for your validation middleware you might as well use something that can also be applied to your business logic.


You should check out tsoa, ts-rest, zodios and feTS


Side note: "Seems like you put a lot of work into this and as such I'm hesitant to criticize. But what in the world made you think [...]"

It is absolutely possible to question the necessity of, or motivation behind, a project without attacking someone. It's not as if OP ruined a million dollar project. I find your comment quite harsh. Prefacing it with the very first sentence shows that you were quite aware of that. Not a good style, in my opinion.

//Despite the downvotes, I still stand by my opinion. I don't get them, but that's fine.


You're not wrong and I tried to mellow my initial reaction when commenting. I don't feel like I attacked OP though. Getting some outside perspective is important.


Type Safety != Runtime Type Validation. Something being Type Safe means that I, the programmer, am safe from making type mistakes while developing and maintaining the software. For that to be possible, you need a type system that is embedded in the language and the editor and can continuously check that all of your code's types are correct. Type safe means that any breaking change to any interface will immediately error and point me to all the locations I need to visit, and therefore I am safe from making any type errors.

While this is a neat project, it's not Type safe, as nothing in the pipeline will inform you about a breaking type error prior to running it and seeing the validation fail. That usually means in production. To catch those errors early before they hit production, you use a type system, and one that exists for JavaScript and works really well, is TypeScript, and I find it mind-boggling that there is no mention of it in this entire document discussing "type safety".


My claim is only type-safety at the HTTP layer. This claim is intentionally specific! As per another comment in this thread, pedantry is the most valuable currency on HN, so this sort of feedback is not only expected but welcome.


It's not pedantry; you're using the term wrong. I understand why it can be confusing if you are just using JavaScript and haven't had the experience of type-safe codebases. Type-safe does not refer to anything at runtime; it refers to the developer being able to refactor their interfaces safely, meaning that the type system informs you of any errors across the entire codebase and all dependents of that software. Type safety really means "developing with type safety". Type validation, on the other hand, can mean runtime type validation, which is what this software is doing: runtime type validation at the HTTP layer. That's a cool project, I admit, very useful stuff, but the term type-safe here is misleading.


Type safety originates from academia, where it refers to the fact that unexpected (ill-typed) operations are prevented. It does not necessarily require any sort of static checking, even though the two are often associated.

Type safety means that no operation receives "unexpected" inputs, but the definition of "unexpected" is not fixed in stone and can vary depending on the context, which makes it a fuzzy concept. If I only define + on integers, and allow `"a"+"b"`, then that's unexpected and must be prevented somehow to be type safe. If I define + on integers or strings, it is fine.

Here using "type safe" is warranted, because the system prevents you from making HTTP calls with unexpected arguments - even though the validation occurs at runtime and the result may be an exception.

(Edit: well to be strictly fair it's not really type-safe because nothing prevents the HTTP endpoint's expected arguments from changing under your feet)


I have to stretch my imagination a lot to see how "type-safe" can mean "it throws at runtime".


The term "type safety" was coined in a context where "does random things then ultimately segfaults (if you are lucky)" is a possible alternative.


Fair enough; if I were to change the qualifiers associated with the project, what’s cool about what exists here and how would you describe it in a way that’s maximally appealing? Is “type-safe” an acceptable generalization given an audience less sophisticated than you or can I use a different framing to the same effect?


For this software to be appealing it would need to really be type-safe, meaning you'd need to use TypeScript. Type-safe means I can change an interface in project B and project A that depends on B now shows me errors in the editor exactly where I need to visit to fix it. If that isn't happening, it can't be called type-safe or generalized to that term. Currently it's a type-unsafe HTTP endpoint runtime type validator.


That’s good feedback! We can probably do both; if you think it’s cool would you be interested in contributing to TypeScript support to circle the square?


For sure! Do you pay money $$$? :)


Feel free to email keith@instant.dev if you’re legitimately interested :)


I find it mind boggling you don't understand that the solution provided here is about type-checks on data, i.e. data validation.

And you can use TypeScript all day but not be protected from missing or wrong data validation. No compiler in the world can help you there.


That's what I'm pointing out though? That it is runtime type validation and not type safety. They are two different things, the only common is the word "type". Please read the comment again.


Whilst nowhere near as robust as TypeScript, JSDoc offers a level of type checking through an editor such as VS Code or IntelliJ.


We tried for years to do things like this, and in the end an approach using typescript is way, way better than anything else I've seen attempted.

In my latest project[1], we set up the types for the API. The server and client are both bound by those types, meaning that one cannot change without the other. This fixes an entire class of bugs where backend code gets updated without the corresponding front-end code changing or vice-versa. It also has the nice side-effect that all of the possible return values (including errors) are nicely documented in one file, and that the errors are very consistent. On the frontend we have a "typed fetch" which returns typed results from the API.
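The "one shared contract binds both sides" idea can be sketched in plain JavaScript — a hypothetical illustration, not the presupplied repo's actual code (`getUserContract` is an invented name):

```javascript
// Hypothetical sketch of a single contract object imported by both the
// server handler and the client's typed fetch. Changing the shape here
// forces both sides to update together.
const getUserContract = {
  method: 'GET',
  path: '/api/user',
  // Runtime guard shared by client and server.
  isValidResponse: (body) =>
    body !== null &&
    typeof body === 'object' &&
    typeof body.id === 'string' &&
    typeof body.email === 'string',
};

getUserContract.isValidResponse({ id: 'u1', email: 'a@b.c' }); // true
getUserContract.isValidResponse({ id: 'u1' });                 // false: missing email
```

In TypeScript the same object would also carry static types, so a drifted backend response becomes a compile error on the frontend rather than a runtime surprise.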

We are also using this for typed access to localStorage, another source of many bugs in past projects, where random keys end up being scattered and causing nondeterministic behavior on different devices.

You can see how our API type system is implemented here:

- common types for both client and server: https://github.com/stoarca/presupplied/blob/master/images/ps...

- client's typed fetch: https://github.com/stoarca/presupplied/blob/master/images/ps...

- server's typed endpoint definitions: https://github.com/stoarca/presupplied/blob/master/images/ps...

[1]: We are working on https://app.presupplied.com, a digital home-schooling curriculum to teach 3-year-olds to read. Planning to expand to math, writing, and programming.


Our approach doesn’t preclude TypeScript! In fact I think it would integrate well with it. If you’re interested in a TS approach could you email me at keith@instant.dev ?


Both the name and intent seem highly inspired from FastAPI!


Interestingly I published a Ruby framework called FastAPI before Python’s FastAPI existed ;) — been thinking about this stuff for a long time.

https://github.com/thestorefront/FastAPI


[flagged]


If you’d like to help build a TS integration, welcome! What’s the MRR of your company? Did you found it or are you an employee? Any specific tips on how TS projects can significantly improve either revenue acquisition or retention over JS projects?


What stops you from using TS? It starts with JS and JSDoc as a starting point and then you can add TS.

Small projects may be fine with JSDoc as a starting point.


> What stops you from using TS

I'm glad you asked

Nothing

I use it every day

:)


This comment encapsulates perfectly what's wrong about the ts talibans... Astonishing to see how their minds can't comprehend that businesses can and do thrive with js.


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


I can definitely comprehend it, I have met some TS talibans and I'm not one of them

What I can't comprehend is wanting types and building a custom library instead of using the right tool

I also just don't get the resistance to TS that we see so often in public projects and npm libraries, it's not harder or any more complicated:

JS REPL: "node"

TS REPL: "npx ts-node"

Run JS file: "node file.js"

Run TS file: "npx ts-node file.ts"

Just do it.



