malwrar a day ago

I don’t understand the benefit of adding this intermediate component vs. just writing frontend code to interact with the APIs you’d otherwise already be calling. An HTTP server with sessions placed between clients and internal systems makes sense and is standard, but the weird callback-style “remote control” system seems completely unnecessary and inefficient to me. I don’t think tying client logic to some backend rather than to the client itself makes any real guarantees about functionality being maintained either; it just adds a new place where code needs to change when something upstream changes.

  • diurnalist a day ago

    The article imo misuses the term BFF a bit, but perhaps its meaning has evolved over time. I was at SoundCloud when BFF was being introduced as an important piece of the microservice architecture--this post explains the purpose well[0]. BFFs enable you to build more general-purpose, domain-specific services that make few assumptions about how they are used or who their callers are. BFFs then provide a composition layer you can use to, e.g., call one service to get a list of tracks, then call an authorization service w/ the list of track IDs to get geo-specific distribution rules for them, and compose that together into one materialized presentation view for the clients of the BFF.
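
    To make the composition idea concrete, here is a minimal sketch of what such a BFF endpoint might look like (the service URLs, route, and field names are made up for illustration, not taken from SoundCloud's actual code):

    ```typescript
    // Hypothetical BFF route composing two upstream services into one view.
    import express from "express";

    const app = express();

    app.get("/bff/playlists/:id/tracks", async (req, res) => {
      // 1. Fetch the raw track list from an assumed tracks service.
      const tracks: { id: string; title: string }[] = await fetch(
        `http://tracks-service/playlists/${req.params.id}/tracks`
      ).then((r) => r.json());

      // 2. Ask an assumed authorization service for geo-specific distribution
      //    rules for exactly those track IDs.
      const rules: Record<string, { allowedRegions: string[] }> = await fetch(
        "http://authz-service/distribution-rules",
        {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify({ trackIds: tracks.map((t) => t.id) }),
        }
      ).then((r) => r.json());

      // 3. Compose one materialized presentation view tailored to this client.
      res.json(
        tracks.map((t) => ({ ...t, allowedRegions: rules[t.id]?.allowedRegions ?? [] }))
      );
    });

    app.listen(8080);
    ```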

    Eventually I think there was some work to move to GraphQL, which can solve some of the same problems. But GraphQL is a technology, and BFF is more of a pattern. There is a later reflection on that blog that makes this distinction, which I only read today.[1] It makes another observation that I kind of forgot about, because it was hiding right in front of my face as a worker there:

    "The defining characteristic of a BFF is that the API used by a client application is part of said application, owned by the same team that owns it, and it is not meant to be used by any other applications or clients."

    The ownership model is indeed a big deal. In practice, it helped in many ways to have a sort of intermediate layer between the client applications and the rest of the architecture. For example, in the SoundCloud web application, when you load a page of playlists, only the first 5 tracks in each playlist are visible to the end-user. So the web BFF had special logic to load only partial track metadata past track 5, which had a significant impact on scalability and latency, especially when rendering a lot of playlists with lots of tracks!
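
    A rough sketch of what that kind of client-specific trimming could look like in a BFF handler (the service URLs and the cutoff constant are illustrative, not the actual SoundCloud implementation):

    ```typescript
    // Hypothetical: only hydrate full metadata for tracks the user can actually see.
    const VISIBLE_TRACKS = 5;

    async function playlistView(playlistId: string) {
      const trackIds: string[] = await fetch(
        `http://playlists-service/playlists/${playlistId}/track-ids`
      ).then((r) => r.json());

      // Full metadata only for the first few tracks rendered in the UI...
      const visible = await Promise.all(
        trackIds.slice(0, VISIBLE_TRACKS).map((id) =>
          fetch(`http://tracks-service/tracks/${id}`).then((r) => r.json())
        )
      );

      // ...and just IDs for the rest, fetched lazily if the user expands the playlist.
      return { visible, remainingTrackIds: trackIds.slice(VISIBLE_TRACKS) };
    }
    ```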

    [0]: https://philcalcado.com/2015/09/18/the_back_end_for_front_en... [1]: https://philcalcado.com/2019/07/12/some_thoughts_graphql_bff...

    • antihero a day ago

      If you have lots of backend services, building a gateway API (aka BFF) using GraphQL is really nice. Your GQL layer handles composition of all the resources into exactly what the client needs, without having to define a REST endpoint for every single use case.

      On the frontend you compose the fragments that each component needs and, in a single request to the gateway, pull exactly the data needed; the gateway makes the minimum required calls to the upstream services and executes them in the most efficient order.
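
      As a rough illustration of that (schema types, fields, and component names are invented here, using Apollo-style fragments):

      ```typescript
      import { gql } from "@apollo/client";

      // Each UI component declares only the fields it needs as a fragment...
      const TRACK_ROW_FRAGMENT = gql`
        fragment TrackRow on Track {
          id
          title
          durationMs
        }
      `;

      const PLAYLIST_HEADER_FRAGMENT = gql`
        fragment PlaylistHeader on Playlist {
          id
          name
          trackCount
        }
      `;

      // ...and the page composes them into a single request to the gateway,
      // which fans out to the upstream services as needed.
      const PLAYLIST_PAGE_QUERY = gql`
        query PlaylistPage($id: ID!) {
          playlist(id: $id) {
            ...PlaylistHeader
            tracks(first: 5) {
              ...TrackRow
            }
          }
        }
        ${PLAYLIST_HEADER_FRAGMENT}
        ${TRACK_ROW_FRAGMENT}
      `;
      ```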

      • diurnalist a day ago

        This is very true! In practice I have seen that it is exceedingly difficult to write the GraphQL "sinks" (I don't recall the exact term) that can intelligently handle things like batching and pre-filtering, where one service in the composed call could and should be called first to limit the result set. YMMV; in my experience it can be simpler to be more explicit about these things, especially when, in a "true" BFF, the client team is also responsible for their immediate backend, which gives them that flexibility, at the cost, perhaps, of more boilerplate.
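
        (For what it's worth, the usual term is "resolvers", and the batching half is typically handled with something like DataLoader. A minimal sketch, with an assumed tracks-service batch endpoint:)

        ```typescript
        import DataLoader from "dataloader";

        // Hypothetical batch loader: collects every track ID requested while resolving
        // one GraphQL query and fetches them from the upstream service in a single call.
        const trackLoader = new DataLoader(async (ids: readonly string[]) => {
          const tracks: { id: string }[] = await fetch("http://tracks-service/tracks/batch", {
            method: "POST",
            headers: { "content-type": "application/json" },
            body: JSON.stringify({ ids }),
          }).then((r) => r.json());

          // DataLoader expects results in the same order as the requested keys.
          const byId = new Map(tracks.map((t) => [t.id, t]));
          return ids.map((id) => byId.get(id) ?? null);
        });

        // A resolver then just asks for single tracks; batching happens behind the scenes.
        const resolvers = {
          Playlist: {
            tracks: (playlist: { trackIds: string[] }) =>
              Promise.all(playlist.trackIds.map((id) => trackLoader.load(id))),
          },
        };
        ```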

      • necovek a day ago

        > gateway API (aka BFF)

        A gateway API is an API that gates other APIs. A BFF is a gateway API used for a particular purpose, clearly identified by the name ("backend for frontend").

        Thus, they are not the same: one is the wider term, the other is a focused implementation of that concept.

        • patmorgan23 a day ago

          And you can have multiple Backends for Frontends for different types of clients (e.g. browser, mobile).

  • abraae a day ago

    Looking forward to enlightening replies to your question; I don't get it either. Keycloak APIs are already hardened, so how is security provably increased by creating a new, far less tested access layer for the browser to use?

  • TobbenTM a day ago

    I think the main attack vector they are trying to protect against is XSS. If a malicious actor manages to inject client-side code, there’s nothing preventing them from exfiltrating tokens and gaining persistent user access, because browsers have no secure enclave to store tokens in. The BFF pattern can address this by using HTTP-only cookies, keeping all session tokens on the server. For high-security scenarios like banking and health it makes sense, but there are so many more attack vectors that it’s not gonna cover it all.
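
    A minimal sketch of the cookie half of that pattern (Express-flavoured; the callback route and the declared helpers are assumed names, and the real tokens stay in some server-side session store):

    ```typescript
    import express from "express";

    const app = express();

    // Assumed helpers for illustration: a real BFF would exchange the OAuth code
    // with the IdP and keep the tokens in a server-side store (Redis, a DB, ...).
    declare function exchangeCodeForTokens(code: string): Promise<{ access_token: string }>;
    declare const sessionStore: { create(tokens: object): Promise<string> };

    app.get("/auth/callback", async (req, res) => {
      const tokens = await exchangeCodeForTokens(String(req.query.code));
      const sessionId = await sessionStore.create(tokens);

      // The browser only ever holds an opaque session ID it cannot read from JS.
      res.cookie("session", sessionId, {
        httpOnly: true,     // invisible to document.cookie, so a script can't exfiltrate it
        secure: true,       // sent over HTTPS only
        sameSite: "strict", // not attached to cross-site requests (CSRF mitigation)
      });
      res.redirect("/");
    });

    app.listen(8080);
    ```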

    • evrflx a day ago

      With an XSS exploit it is game over: you control the browser. Adding more complexity and opening up the possibility of CSRF exploits with a BFF does not look like a good trade-off to me.

      • TobbenTM a day ago

        You don’t open yourself up to CSRF attacks if you use SameSite cookies, which I guess is part of why this pattern is seeing more use now.

steeeeeve a day ago

The BFF pattern is just "mostly microservices dedicated to a particular client type".

It makes sense when you have drastically different needs between a desktop client and a mobile client (or maybe for a kiosk client or POS interface).

Hosting a microservice is cheap, it avoids unnecessary workload on backend data stores, and teams can operate with more autonomy if they don't have to cooperatively update APIs in coordination with other groups with differing priorities.

This article really just reads like "I figured out how to do authentication with keycloak using OIDC"

  • brtkdotse a day ago

    > The BFF pattern is just "mostly microservices dedicated to a particular client type".

    If you’re building dedicated APIs for just one client, you’ve come full circle and are building a monolith with a bunch of extra steps. Why not just build a good old web app at that point?

    • antihero a day ago

      No, you’re not. You’re building specialised tools for each job.

      Also, with GQL you can separate by business domain as opposed to consumer; for example, your customer apps/website could use one gateway, and your back-office app could use another.

      Also, if you have an old monolith in something like Rails and want to build new functionality in something different whilst migrating parts of it over, it makes sense to have a service-oriented architecture. A gateway is a layer that exposes all of the services in a unified and consistent manner.

    • lmz a day ago

      Because some people like the native app experience better?

necovek a day ago

Formally proving that you can't be authenticated to a service without actually holding some type of authentication token is trivial.

Whatever you replace it with (a short-lived access token, a session cookie, a JWT, or whatever else) becomes the authentication token — if you add other properties (like short expiry), you are mostly trading convenience for security (e.g. people will get logged out, or you'll need to implement a refresh-token dance in your app).

So it's really confusing to me how someone can pretend that there is something you can do to make the unsafe client side safe, because you really can't (sure, there are obvious things you shouldn't do, like keeping a plain-text password in a cookie or localStorage, but an auth token is pretty much the same thing other than expiry).

  • horsawlarway 9 hours ago

    This is my take as well. localStorage isn't the wild west anymore with regard to cross-site access (or even subdomain access) in modern browsers.

    His mission statement up front is just... wrong (and as an aside, I like Keycloak, there's nothing wrong with it; I auth several services I host with it). But to be exceptionally clear - the following is bullshit:

    > Our goal? Design an application that never stores sensitive data in the browser.

    Why bullshit? He then goes ahead and stores a cookie in the browser which his BFF happily upgrades to an access_token and makes requests with.

    So his cookie is just his new token. Full fucking stop. Still sitting right there in the browser.

    Are there reasons to prefer a cookie over doing something like keeping an access_token? Yes, although they're not as convincing as one might hope. They're certainly not as convincing as this article makes them out to be.

    Generally speaking - the argument is that an HTTP-only cookie can't be read by JavaScript, preventing vulnerability to XSS. There is truth to the claim that exfiltrating an HTTP-only cookie is harder, but it doesn't remove any XSS vulnerabilities. All it does is add a marginal level of complexity: instead of taking your token and doing stuff later, the attacker just scripts the interaction they would have performed and puts it in the XSS payload up front. It all runs on your client, which happily sends that cookie along for the ride. Marginally more complicated, absolutely doable, still completely hacked.
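
    To illustrate (a deliberately simplified sketch; the endpoint is hypothetical): the injected script never needs to read the cookie, because the browser attaches it automatically:

    ```typescript
    // Inside an XSS payload: drive the BFF from the victim's own page.
    // The HTTP-only session cookie rides along on the request, untouched and unread.
    await fetch("/bff/account/change-email", {
      method: "POST",
      credentials: "include", // the browser sends the victim's cookies for us
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ email: "attacker@example.com" }),
    });
    ```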

    ---

    IMO there are really only two reasons to prefer a BFF:

    1. Your client has needs that don't mirror the structure of the API you're accessing. Your BFF can consolidate and simplify the API that the client has to consume, while also ensuring that you can handle breaking upstream API changes without needing to redeploy clients (the latter is not a huge deal for websites, but IS a big deal for things that need to pass store review on redeploy - like mobile apps or browser extensions - and also for things that are technically complex to deploy, like enterprise desktop software)

    2. You can reduce the available footprint of the API beyond the default scopes of your access_token. This is the only real security win - period. If you build a client that never needs to touch billing endpoints in the API... don't ever expose them in the BFF. You can limit the blast radius of an attack to things the client should be doing, even if the token would have allowed other, more dangerous, things. This is usually a sign of poorly implemented token scopes, but if that's out of your control - this is a viable solution.
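
    A sketch of what point 2 can look like in practice (the routes and the proxy helper are invented): the BFF mounts only the endpoints this client actually needs, even though the server-held token could do more upstream:

    ```typescript
    import express from "express";

    const app = express();

    // Assumed helper that attaches the server-held access_token and forwards the call upstream.
    declare function proxyUpstream(upstreamPath: string): express.RequestHandler;

    // Expose only what this client legitimately needs...
    app.get("/bff/profile", proxyUpstream("/api/v1/users/me"));
    app.get("/bff/orders", proxyUpstream("/api/v1/orders"));

    // ...and simply never mount /billing, /admin, etc. Even a fully compromised
    // client session can only reach the routes defined here.

    app.listen(8080);
    ```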

    In a lot of cases - the extra complexity just isn't worth it. Prefer using tooling that makes XSS difficult (React is genuinely pretty solid by default here, but you need to be careful with libraries and still enforce that you're not allowing dangerous calls). Prefer using SameSite and CORS with an HTTP-only cookie by default from your auth provider if you control the whole stack and can configure URI allowlists (especially if you're only authing across subdomains).

windlep a day ago

I've looked over the code, and some things seem a little odd to me.

The article starts by mentioning how insecure the browser is; apparently even cookies aren't secure. But then the API to talk to the BFF uses... a server-side session tracked via a client cookie. If the BFF is holding the OAuth credentials, then someone could steal the client cookie and make requests to the BFF to do whatever it can do.

It's not impossible to secure the browser from having credentials stolen from inside it, but it can be tricky to ensure that when the browser sends the credential in the request it doesn't leak somehow.

There's some irony, as OAuth has DPoP now, which can reduce the usefulness of stolen in-flight credentials, but that can't be used in this BFF setup because the browser client would need the private key to sign the requests.
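
For context, a rough sketch of the standard DPoP proof shape (using the jose library; the URL and token are placeholders). The point is that whoever holds the private key is the party the token gets bound to, so the binding only protects the browser if the browser itself signs each request:

```typescript
import { generateKeyPair, exportJWK, SignJWT } from "jose";

// The key pair the access token is bound to. In a browser-side DPoP flow this
// would live (ideally non-extractably) in the client, not behind a BFF.
const { publicKey, privateKey } = await generateKeyPair("ES256");

async function dpopProof(method: string, url: string): Promise<string> {
  return new SignJWT({ htm: method, htu: url })
    .setProtectedHeader({ alg: "ES256", typ: "dpop+jwt", jwk: await exportJWK(publicKey) })
    .setIssuedAt()
    .setJti(crypto.randomUUID())
    .sign(privateKey);
}

// Every request carries a fresh proof signed with the private key.
const accessToken = "<token from the authorization server>"; // placeholder
const res = await fetch("https://api.example.com/resource", {
  headers: {
    Authorization: `DPoP ${accessToken}`,
    DPoP: await dpopProof("GET", "https://api.example.com/resource"),
  },
});
```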

Properly securing the browser content on a login page, or on the subdomain handling authentication credentials, is definitely a challenge, and many don't like having to eliminate/audit any third-party JS they include on the page. I can see the appeal of a solution like this, but the trade-off isn't great.

yearesadpeople a day ago

The BFF pattern is very much misunderstood, and very much overused IMHO.

Perhaps the most useful case I've seen - in highly distributed systems - is when we require some kind of flow orchestration and don't want the orchestration logic in the API implementation (or, indeed, don't want the downstream services to have to consider different contexts).

[edit] Quite useful when designing nice, clean, dedicated new APIs while having to deal with legacy systems (perhaps data pertaining to the shiny new API model is housed in a legacy model): a useful means to keep moving forward.

cjcampbell a day ago

I’m surprised that the author chose to configure a public OIDC client for this scenario. Part of the benefit of this pattern is that it’s possible to use a confidential client, since the BFF can securely hold the client secret.
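
For reference, the difference is roughly this (a sketch with an illustrative Keycloak realm, client ID, and redirect URI): a confidential client authenticates to the token endpoint with a secret that only the BFF holds, which a public client running in the browser cannot do:

```typescript
// Hypothetical authorization-code exchange performed by the BFF as a
// *confidential* OIDC client. The client_secret lives only on the server
// and never ships in the browser bundle.
async function exchangeCode(code: string) {
  const res = await fetch(
    "https://keycloak.example.com/realms/demo/protocol/openid-connect/token",
    {
      method: "POST",
      headers: { "content-type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "authorization_code",
        code,
        redirect_uri: "https://app.example.com/auth/callback",
        client_id: "web-bff",
        client_secret: process.env.OIDC_CLIENT_SECRET ?? "", // available server-side only
      }),
    }
  );
  return res.json(); // { access_token, refresh_token, id_token, ... }
}
```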

lakomen a day ago

And I thought it's best friends forever! SCNR