Passwordless Auth to Servers: Hands-on with ASA


These days, you don't need a separate password for every website you log into. Yet thanks to tech debt and tradition, many DevOps professionals are still wrangling a host of SSH keys to access the servers where we sometimes need to be. With modern OAuth, a single login and second factor to prove your identity are enough to securely get you into every service that you're authorized to access. What if SSHing into servers was that easy? In this workshop, we'll use Okta's Advanced Server Access tool (formerly ScaleFT) to experience one way that the dream of sending SSH keys the way of the password has been realized.


- we'll discuss how ASA works and when it's the right tool for the job

- we'll walk through setting up a free trial Okta account to use ASA from, and configuring the ASA gateway and server on Linux servers

- we'll then SSH into our hosts with the ASA clients without needing to supply an SSH key from our laptops

- we'll review the audit logs of our SSH sessions to examine what commands were run

32 min
29 Mar, 2022


Video Summary and Transcription

This workshop introduces Okta's ASA tool for managing server access and explores the concepts of authentication and authorization. It discusses the challenges of managing SSH keys and the benefits of using one-time use keys. The workshop also explains how Okta's ASA simplifies server access by reusing an identity provider. It provides a step-by-step guide for setting up and managing authorization and authentication using Okta's ASA. Finally, it covers the process of setting up servers and clients, as well as the cleanup after testing.


1. Introduction to Okta's ASA

Short description:

Hi there, everybody. I am Emily to humans or eDunham to computers. I'm an open source nerd and a sysadmin from back when we were just starting to call ourselves DevOps. I work as a developer advocate at Okta. And I will be showing you an Okta tool today, but my opinions and any errors I might make are entirely my own. A lot of thought leadership and advances in DevOps focus on getting us away from running and managing servers. But a lot of us still do have to run servers. Maybe it's because we're working on migrating out of an old architecture that was state of the art when it was first designed. Or maybe we're dealing with workloads that just genuinely require us to more directly manage our compute. Either way, when we manage our own cloud or hardware resources, we can still factor out individual tasks that would be better to do with purpose-built software or services. Today, I'd like to introduce you to a trick for factoring out one of the hard parts of managing who can access what and when, and show you how to replicate that trick. The tool that does this cool trick, Okta's ASA, will not be the perfect fit for all access use cases. However, even if you never end up working with ASA directly, I hope that understanding its context will help you think critically about the strengths and weaknesses of your existing tools like conventional SSH. Knowing more about your tools and their alternatives can help you make the best security choices for each operational challenge that you find yourself up against.

Hi there, everybody. I am Emily to humans or eDunham to computers. I'm an open source nerd and a sysadmin from back when we were just starting to call ourselves DevOps. I work as a developer advocate at Okta. And I will be showing you an Okta tool today, but my opinions and any errors I might make are entirely my own.

We've got about two hours and a split audience. Some people are here in the Zoom room, and others are watching this recording in the future. We'll record the first half with all these explanations so that future people can join us for it. But for the hands-on half, toward the end, I'll stop the recording so that we can just chat. And you can ask the questions that you might not want immortalized for posterity. You don't have to watch the screen as you listen to this. But it will help to see the screenshots once I get to the setup overview.

Now, a lot of thought leadership and advances in DevOps focus on getting us away from running and managing servers. That's often great. You can offload individual tasks to people or entities that specialize in them. Your database is run by database experts, your network tooling is managed by networking experts. And that's awesome when it's something you can do. When you have the resources to offload domain-specific work onto specialists in those domains, they will usually do it better than your team could. Just like how when you're coding, if you can use a well-tested library of cryptographic primitives instead of trying to roll your own, you'll almost always end up with more secure code as a result.

But a lot of us still do have to run servers. Maybe it's because we're working on migrating out of an old architecture that was state of the art when it was first designed. Or maybe we're dealing with workloads that just genuinely require us to more directly manage our compute. Either way, when we manage our own cloud or hardware resources, we can still factor out individual tasks that would be better to do with purpose built software or services. Today, I'd like to introduce you to a trick for factoring out one of the hard parts of managing who can access what and when and show you how to replicate that trick.

Now, as a person who listens to talks and uses tools also, I'll start with that disclaimer that I'm always listening for. The tool that does this cool trick, Okta's ASA, will not be the perfect fit for all access use cases. The biggest blocker you might encounter is that the pricing is per server, and it's most tested with the assumption that you're using Okta as an identity provider. The identity provider bit is relatively negotiable, but sadly the pricing isn't. However, even if you never end up working with ASA directly, I hope that understanding its context will help you think critically about the strengths and weaknesses of your existing tools like conventional SSH. Knowing more about your tools and their alternatives can help you make the best security choices for each operational challenge that you find yourself up against.

2. Understanding Authentication and Authorization

Short description:

Since we're talking about security, let's clarify what auth means. It expands in two ways: authentication and authorization. Authentication verifies if someone is the same person as before, while authorization determines if the authenticated person is allowed to perform a specific action. No authentication method is perfect, but adding multiple factors makes it harder for attackers to impersonate us. Authentication alone is not enough for authorization. Just like having an ID does not guarantee entry into a bar, being authenticated does not guarantee authorization. Authorization can change over time. Understanding authentication and authorization is crucial when managing access to resources.

Since we're talking about security, I'd like to make sure we're all on the same page about what auth even means. This will be old news to some of you, but everyone hears it for the first time at some point, and people come into DevOps from all different backgrounds. So, the trick to auth is that it expands in two different ways. First, it might mean authentication. Authentication answers the question, is this the same person as before? For instance, a driver's license or passport is often used for authentication to get into places like bars or airplanes. The document with my face on it shows that I'm probably the same person that the government issued that ID to. My username, password, and access to my phone are a form of authentication for many online accounts. If I have them, I'm probably the same person who created the account. My cat's microchip is also a form of authentication. If she ran off and someone scanned her chip to figure out how to get her home, it would tell them that she's the same cat that got put into that database when I first took her to the vet.

So, the disclaimer here is that no form of authentication is perfect. Anywhere that software interfaces with the rest of the world, there's room for unexpected things to happen. If someone knew all of my secrets and had access to all of my devices and could use my biometrics, they could impersonate me. If someone could convincingly make their face look just like mine, they could pretend to be me using my photo ID. To paraphrase a great author, as technology advances, the technology for fooling that technology grows alongside it. It keeps things interesting. But what we can do is make it prohibitively inconvenient for someone else to impersonate us. Every time we add another factor of authentication, we make it harder for an attacker to pretend successfully that they are us. To get into my work accounts, you'd need to know my password, and you'd need to have my phone, and you'd need my biometrics, all together. It's easy-ish for a password to be compromised, especially if someone reuses it. It's easy-ish for a phone to get stolen or a SIM card to get hijacked. But the more factors you add together, the more things would have to go right for an attacker to defeat your authentication mechanisms. So, that's authentication. It answers the question, are you the same person as before?

Now, authentication is necessary but not sufficient for authorization. Just like being the same person on my ID isn't enough to guarantee that I can get into a bar, I also need to be authorized by being of age. Having a passport isn't enough to let you get onto a plane, you also need to be authorized for that flight by having a ticket. Authorization can change from day to day or even minute to minute. If I had a ticket for this flight yesterday, that doesn't mean I can get on the same flight tomorrow without a new ticket. If I'm logged into my Google account and someone shares a document with me, I wasn't authorized to see it a moment ago, and I am now. So, authorization answers the question, is this authenticated person allowed to do the thing they're trying to? What has this got to do with accessing servers? Well, if you've ever managed access to resources as an Ops person, you've reasoned about both authentication of users and authorization.

3. Authentication and Authorization Mechanism

Short description:

If you create a key with a passphrase, then whoever knows the passphrase is probably the same person that created the key, or at least someone they've delegated that to. But just having the key doesn't get you into the host. Somebody has to put the key on the host. When they put your public key on the host, they authorize anyone with your key to log into it for as long as that public key remains there. Managing all this at scale is prone to becoming a real adventure. With shared keys, onboarding is trivial. You just give them a copy of the key. But doing offboarding right is very expensive. What happens when somebody retires to a tropical island and you need them out of the system? Are you going to rotate the key for everybody who has it? Or are you just going to give up and boot them from the network, removing their access at another layer, so that even though they'd still have the key, they can't get through to use it? With one key per user, offboarding is just a matter of removing the one key from every host that you added it to. But some of the burden shifts to onboarding. You've got to set up and maintain automation to push the new keys to all the hosts you're granting access to and none of the ones that you aren't. This has to run correctly and promptly every time you add a user and every time a user needs to change their key for any reason. And having somebody from the ops team follow a checklist or their memory to do the work of a CI job manually is still a form of automation. Scripts are scripts even if you run every step by hand; running them by hand is just slower, more prone to errors, and creates extra human suffering.

SSH keys are a familiar authentication and authorization mechanism. If you create a key with a passphrase, then whoever knows the passphrase is probably the same person that created the key, or at least someone they've delegated that to. But just having the key doesn't get you into the host. Somebody has to put the key on the host. When they put your public key on the host, they authorize anyone with your key to log into it for as long as that public key remains there. So on the client, you have your private key. On the server, you have your public key. And when you try to log in, they do some math together to make sure that the two halves probably came from the same key. Well, they do math that proves it would be prohibitively difficult to fake the key, for now. Someday in the future, the keys we're currently using will get easier and easier to break as technology advances. Or sometimes you'll have a bunch of humans who share a key and they all log into the same account on a server. The server thinks there's only one client, which is sometimes okay, but will often lead to unintended consequences. Managing all this at scale is prone to becoming a real adventure, although there are a few different rugs that you can sweep different parts of the problem under. With shared keys, onboarding is trivial. You just give them a copy of the key. But doing offboarding right is very expensive. What happens when somebody retires to a tropical island and you need them out of the system? Are you going to rotate the key for everybody who has it? Or are you just going to give up and boot them from the network, removing their access at another layer, so that even though they'd still have the key, they can't get through to use it? Sometimes this is solved by just trying not to think about how many copies of that private key are wandering around in the wild in the files of former colleagues. With one key per user, offboarding is just a matter of removing the one key from every host that you added it to. But some of the burden shifts to onboarding. You've got to set up and maintain automation to push the new keys to all the hosts you're granting access to and none of the ones that you aren't. This has to run correctly and promptly every time you add a user and every time a user needs to change their key for any reason. And having somebody from the ops team follow a checklist or their memory to do the work of a CI job manually is still a form of automation. Scripts are scripts even if you run every step by hand; running them by hand is just slower, more prone to errors, and creates extra human suffering.
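To make that concrete, here is a rough sketch of the per-user-key workflow described above. The username, email address, and paths are placeholders for illustration, not anything from the workshop:

    # On the new hire's machine: generate a keypair protected by a passphrase
    ssh-keygen -t ed25519 -C "newhire@example.com" -f ~/.ssh/id_ed25519

    # Onboarding: some person or automation appends the public key to the
    # right account's authorized_keys on every host the user should reach
    cat id_ed25519.pub >> /home/newhire/.ssh/authorized_keys

    # Offboarding: that same key has to come back off of every one of those hosts
    sed -i '/newhire@example.com/d' /home/newhire/.ssh/authorized_keys

Multiply those last two steps by every host and every personnel change, and you have the automation burden the transcript is describing.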

4. Managing Server Access with One-Time Use Keys

Short description:

Over the years, people have come up with better and worse solutions for managing the security headaches that surround authentication and authorization. Instead of having a single, long lived, valuable key, it turns out you can just make one-time use keys for SSH and RDP access to servers, which you as the user never actually have to touch or think about. Instead of asking every single host in the network who's allowed to access it, you can use the same source of truth about identity that all your other apps like chat and email are already using.

Over the years, people have come up with better and worse solutions for managing the security headaches that surround authentication and authorization. For instance, it's often appropriate to route traffic to internal hosts through a bastion, which is better hardened than it would be practical to make all the servers in your deployment. And on that bastion, you can implement exactly the monitoring and logging that you need. Most deployments also apply network rules to prevent access between hosts where no authorized person would want to connect, but an attacker might. For instance, the authorized people will probably never connect directly from app server A to app server B. Like, why would you do that? They'll always be going in through a different route, like through the bastion. But an attacker who's compromised app server A will really want to go straight to app server B. So, you can just forbid that access. These extra layers of security are usually great to have, but they don't necessarily obviate the entire underlying problem. For instance, if my permissions need to change on a host that has my key on it, somebody has to log in and change them. If I cease to be on a team that's authorized to access the host, some person or script needs to go take my public key off of it. This is the old familiar way of managing access, and it works pretty well for a lot of use cases. It has a certain simplicity and ubiquity and familiarity on its side. But the big drawback is that it makes your SSH keys and passphrases extremely valuable, dangerous to lose, and inconvenient to replace. They are literally the keys to your castle. And nobody wants to have to go rekey every single lock, whether that's physical or digital. However, when I was onboarding at Okta, I was introduced to a totally different yet surprisingly backwards compatible paradigm for managing server access. Instead of having a single, long-lived, valuable key, it turns out you can just make one-time use keys for SSH and RDP access to servers, which you as the user never actually have to touch or think about. Instead of asking every single host in the network who's allowed to access it, you can use the same source of truth about identity that all your other apps like chat and email are already using. So, instead of asking Ops people to handle yet another set of crucial secrets, you can piggyback on the authentication that they're already doing in their browser to access all of their other vital tools. The SSH and RDP protocols are still used because they're actually great. They're ubiquitous and well tested and well supported. We're only changing the details of how we manage the keys involved in order to reduce those keys' value to attackers. So, I'll show you how it looks. It is super boring, because fun and interesting are not words that I want to hear when we're talking about a workflow that lets us get access where we need it to end incidents and outages faster. So, this happens to be a pretty new laptop. I don't actually have anything in my SSH directory. I don't need any keys here. I've got a config, though. I've set up a proxy in the config so that I can use my local SSH binary instead of a wrapper.
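For reference, the proxy setup she's describing in ~/.ssh/config looks roughly like the snippet below. This is the general shape of what the ASA client (sft) generates for you; the exact binary path and known_hosts location vary by operating system, so treat it as a sketch rather than literal output:

    # ~/.ssh/config: route matching hosts through the ASA client instead of a
    # locally stored key (shape of the config the sft tool can generate;
    # binary path and known_hosts location vary by OS)
    Match exec "/usr/local/bin/sft resolve -q %h"
        ProxyCommand "/usr/local/bin/sft" proxycommand %h
        UserKnownHostsFile "~/.config/ScaleFT/proxycommand_known_hosts"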

5. Understanding SSH and Zero Trust

Short description:

Just a proxy command there. I have access to the host DevOps JS, spun up as a toy for this workshop. No SSH keys. Now let's talk about what's happening under the hood. Back in 2015, engineers built a product called ScaleFT for zero trust server access. It was acquired by Okta in 2018 and renamed ASA. Zero trust means authentication and authorization are rechecked on every attempt. Traditional SSH key management has vulnerabilities, but with zero trust authentication, hosts don't keep track of access permissions.

Just a proxy command there. And then I've got known hosts set up, so I won't have to be like, why, yes, I do trust this server. So, I'll ask this tool what I have access to. And I have access to the host DevOps JS. I spun it up as a toy we can play with during this workshop. No SSH keys. I'm just gonna SSH into it. And I meant it. Right. I don't have a .ssh directory. I don't have any keys on this host. And so, nobody can, like, grab my key and pretend to be me. Super simple. Super boring. Boring is sometimes good.
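On screen, that demo is just a couple of commands. A rough reconstruction, where the host name devopsjs stands in for the toy host she spun up:

    $ sft list-servers    # ask ASA which hosts this identity is currently authorized to reach
    $ ssh devopsjs        # plain ssh, routed through the sft proxy command shown earlier
    $ ls -a ~/.ssh        # config and known hosts, but no long-lived private keys to steal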

So, now that I've made SSH way more boring on the surface than it ever used to be, let's talk about what's happening under the hood. So, when you touch the software that lets us do all this, the first thing you'll notice is that it has its history baked into its naming conventions. Back in 2015, some engineers built a product for zero trust server access called ScaleFT. They were bought by Okta in 2018 and have been refining their technology and its integration with Okta as an identity provider ever since. In this acquisition, the ScaleFT product was renamed ASA, for Advanced Server Access. So, ASA and SFT are, from a user perspective, different names for the same thing. You might notice that I snuck a buzzword into that explanation. Zero trust. This just means that a user's authentication and authorization are rechecked on more or less every attempt to do something. When you just recently authenticated, we have a pretty high level of confidence that you are the person whose creds you're using. But the longer it's been since you last authenticated, the less confident we can be that the person using your credentials is really their owner. Maybe someone snatched your laptop or intercepted some network traffic, or you stepped away for coffee and the cat walked across your keyboard. With traditional SSH key management, it's usually been a very long time since your key was added to a host. That's a lot of opportunities for someone else to intercept it, and it's what gives traditional keys such a high value. With zero trust authentication, however, we don't ask the hosts to keep track of who's allowed to access them.

6. Using Okta's ASA for Server Access

Short description:

The identity provider takes care of knowing who the users and servers are, as well as who has access rights. By reusing an identity provider, accessing servers becomes as simple as logging into any other app. We'll walk through the setup quickly, but it's not necessary to follow along right now. We'll explain three levels of difficulty for getting hands-on with ASA and provide a demo organization and server for easy options. To get started, sign up for a free trial Okta account and set up a second factor for admin access. Then, add the Advanced Server Access app to your trial account.

Instead, the identity provider worries about knowing who the users are, who the servers are, and who's allowed to do what and where and when. The naming convention here is that we refer to the users as clients, the servers as agents, and the central source of truth about what's allowed and who's who as the identity provider. Clients aren't always humans. They might be other automation in your infrastructure. Agents might not always be servers. They can be almost anything you could SSH into.

Now, the super convenient part of this whole process is that as the user, I have less to think about than I would if I were managing my own keys. By reusing an identity provider where the organization was already centralizing access to other apps like chat and email, accessing servers becomes like logging into anything else. So our hands-on part of this lab is going to feel a little contrived because we are starting from zero. We're not assuming that you have the right permissions to play like this in an existing Okta account. Just like writing a demo app with a full-featured web framework will require more boilerplate and dependencies than a similar demo app with a framework that doesn't scale as well, this setup is total overkill for the tasks that we'll be doing in the demo.

I'll run through how the setup works pretty quickly now because people watching the recording later will be able to pause it if they want to follow along. Friends in the room, I do not recommend attempting to follow every step right this minute. After we stop the recording, I'll explain three levels of difficulty for getting hands-on with ASA and walk you through each while answering questions and helping debug any issues you might encounter. For the easy options, I'll let you use my demo organization and the toy server that I just showed you a minute ago. But I'll be spinning those down after the conference, so they won't be useful to people in the future.

To play with ASA through Okta, we're first going to need an Okta account to do it with. We'll just spin up a free trial account. To avoid any surprises, be aware that support is really enthusiastic about getting in touch and helping resolve any issues you might have. If that affects your choice of how you provide your email address, or even whether you decide to sign up, just be informed. You hop on over to okta.com. You sign up for an account, you get your activation email, and then you log in and change your password. Set up a second factor when it prompts you to, because you are going to need a second factor when you sudo up in that account, so to speak: when you become an admin there, when you access the admin panel. So in that Okta account, we're going to hop into the admin panel to become an administrator. Here it will force you to have a second factor, because we can't just have admins running around with only password auth. And we can tell that we're admins because you can see admin up in the URL, as well as the console looking different. So we'll go to Applications in the sidebar and Applications under that, and we'll browse the app catalog. We'll just do a quick search for Advanced Server Access and we'll add that app to our trial account. So no need to change any settings here; it just does the right thing out of the box. And then Okta needs to know who's allowed to use ASA before it will let anyone use it.

7. Managing Authorization and Authentication

Short description:

To authorize access, assign the user and choose a username in ASA. If using Okta groups, add the group representing the Okta team. Keep the ASA tab open and prepare to edit the settings. Open app.scaleft.com to create a new team. The handshake between Okta and ScaleFT is the only tricky part. In Okta, edit the settings, while in ScaleFT ASA, input and copy settings. Pick a globally unique team name. Transfer information between the apps using the provided URLs. Save in Okta and authenticate in ScaleFT. Ensure user permissions and correct URLs to avoid errors. This process only happens once per org.

This is the authorization step. To do the minimum viable demo, I'll assign my user to it and pick what username ASA will give me. Although, if I was in an Okta instance with established groups, I'd add people by groups. I'd add the group that represents the Okta team. You can always go back later and change it so that you're giving permissions via groups rather than via individual users.

So keep that ASA tab open on the Sign On tab. That's where you'll have the settings; you're going to prepare to edit those settings. And you're also going to open app.scaleft.com and start creating a new team. Now, here's the only really tricky part of the whole process. We need the two applications to basically shake hands. Okta needs to know some things about ScaleFT, and ScaleFT needs to know some things about the ASA app in Okta. So, this handshake is basically the only hard part.

So, in your Okta tab, you're seeing something that looks like this. Editing your settings. In your ScaleFT ASA tab, you're seeing something like the thing on the right, where you're putting in some settings and copying out some settings. You will be picking a team name here. I highly recommend picking, like, your name DevOps JS, your name testing. Something like that. Because the team names are globally unique. Kind of like S3 bucket naming. So, to transfer information between the two apps, that URL named identity provider metadata goes in the IDP metadata URL box. And then the base URL and audience restriction get transferred back the other way. First, you save in Okta. And then, once you've done everything up to this point, you hit the authenticate button in ScaleFT. If you encounter an error when you try to authenticate, you may have forgotten to add your user and give your user permissions. You may have failed to put the base URL and audience restriction into Okta, or maybe you have the wrong or an invalid value in the IDP metadata URL. It's one of those things where following the steps precisely matters right there. So that was the hardest part. Congratulations. That happens exactly once per org.

8. Setting Up Projects and Authorizations

Short description:

Your environments like dev and prod and staging will be represented as separate projects within that same team. Now if we want changes in Okta group membership and user permissions to automatically push to ASA, we have one more little link to make. Under the provisioning tab in Okta, we'll set up the API integration. That's the system for cross domain identity management. Finally, we need to authorize people to do things. For demonstration purposes, I will add the group Everyone to the project. Now let's get a server. We could do the setup once on the image we plan to deploy, and then just clone it a bunch of times. Or if we had an AWS or GCP account, we could configure it to automatically enroll all new servers in ASA. However, the minimum viable demonstration is to play with token enrollment.

Your environments like dev and prod and staging will be represented as separate projects within that same team. So you'll never have to do that setup ever again as you scale.

Now if we want changes in Okta group membership and user permissions and stuff to automatically push to ASA, we have one more little link to make. If you're just playing with this by yourself, you can skip this part, but it's really handy if you want to play with syncing teams back and forth. Under the Provisioning tab in Okta, we'll set up the API integration by hitting that button. It'll open ASA over here. We'll approve granting the permissions and give the service user some service username that we'll recognize. We'll approve it and then we'll save it in Okta. That's SCIM, the System for Cross-domain Identity Management, and then you will be able to sync things back and forth.

So the final little bits of setup to get ready to enroll servers: we're going to need to create a project on the ASA side. You might want to use projects to split Dev, Staging, and Prod. Later on I will offer to create individual projects for those who would like to enroll a non-shared toy server, and I've just created the project DevOpsJS to play with today. So there's a bunch of options in the Create Project dialog. They're useful in real deployments. Don't worry about the options. All you care about to get started is having the name. So finally we need to authorize people to do things. For demonstration purposes, I will add the group Everyone to the project. Later on you can create more groups and sync them to manage finer grained permissions, and later on as you play with it, you can play with what we call sudo entitlements, which will let certain users or groups only use sudo for particular commands. So we've sorted out the identity provider, and that was the hard part. That's also the part that you only do once regardless of deployment size. Now let's get a server. If we were doing this for a whole bunch of servers, we could do the setup once on the image that we plan to deploy, and then just clone it a bunch of times. And those clones will register themselves to ASA, and it'll be like, oh, you were deployed from that image. Cool. And it'll create the host entry itself. Or if we had an AWS or GCP account, we could configure it to automatically enroll all new servers in ASA even without the token. However, the minimum viable demonstration is to play with token enrollment because it's the simplest for a one-off. So under Enrollment right here, in my ASA project, I'll create an enrollment token. And I'll copy that enrollment token.

9. Setting Up Server and Client

Short description:

To set up the server, place the token and install the ASA agent. The token is placed in a specific file, and when the agent is installed, it registers with the identity provider. Starting the service will make it appear in the identity provider. In the real world, automation would handle this process. For server setup, make sure to read the setup notes for your distro. On new-ish Ubuntu versions, an extra step is required in the SSH config. Once SFTD is running and registered, the server is ready. The client setup is even easier.

Now, the only tricky bit about getting the server set up is that first you place the token, then you install the server agent. On Linux, I'll stick the token into /var/lib/sftd/enrollment.token. And on a clean Linux host, I'm also going to need to create the directory /var/lib/sftd first, because I haven't installed the tools yet. On Windows, it would be under C:\Windows\System32\config\systemprofile\AppData\Local\ScaleFT\enrollment.token. So your token goes in that file, nothing else in that file, just paste the thing that you copied from the web interface. And then when you install the appropriate ASA agent for whatever you're running, it will see the token on startup, register itself with the identity provider, and just work. So you start the service, and it shows up in your identity provider. They contact one another behind the scenes, verify the token, and the host appears. So in the real world, we would have our automation do that whole thing. But this is, of course, the sandbox. You've heard about see one, do one, teach one as a way to solidify knowledge. And automating a task falls solidly under that teach one category. So you get to see one now, you get to do one as you play with it. And then you can teach one to your automation if you end up using this tool in the real world. So the only trick with server setup: make sure you read all the notes in the setup page for your distro. There's one extra step on new-ish Ubuntu versions, where you drop a line into your SSH config saying, hey, yeah, we actually do allow the RSA algorithm. And when sftd is running and has registered itself with the identity provider, you've got a server. I told you it gets easier, and it's going to get easier still for the client.
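As a rough sketch, the server-side steps described above boil down to something like the following on a Debian-style Linux host. The package name and the Ubuntu sshd line reflect my reading of the ASA setup docs at the time, so double-check them against the current instructions for your distro before relying on them:

    # place the enrollment token where the agent expects to find it
    sudo mkdir -p /var/lib/sftd
    echo "PASTE-YOUR-ENROLLMENT-TOKEN-HERE" | sudo tee /var/lib/sftd/enrollment.token

    # install the server agent (after adding Okta's ASA package repository per
    # the docs), then start it; on first start it reads the token and registers
    sudo apt-get install scaleft-server-tools
    sudo systemctl enable --now sftd

    # the extra step on newer Ubuntu releases: tell sshd that RSA-signed client
    # certificates are still acceptable, then restart the SSH daemon
    echo "CASignatureAlgorithms +ssh-rsa" | sudo tee -a /etc/ssh/sshd_config
    sudo systemctl restart ssh    # the unit is sshd on some other distros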

10. Setting Up Client and Cleanup

Short description:

To set up a client, install the client on your OS and enroll it. You can install it on your laptop or in a container. It uses a web browser for authentication. If running headless, a URL is provided for authentication. Once installed, run sft enroll and choose a team. The setup centralizes access management, simplifies scaling, and allows reuse of an existing identity provider. Play with features like generating proxy commands, using a bastion for session logging, and revoking user permissions. Clean up after testing by uninstalling clients, removing containers, and spinning down servers. Trial accounts will time out after 30 days, but can be deactivated upon request. Thank you for participating and feel free to reach out with any questions.

So, finally, two steps to setting up a client. Installing the client on whatever OS you're being a client from, and enrolling it. So, installation: you can install it just on your laptop, or you can install it in a container. It'll want to use a web browser to do its authentication, to prove that you're you in this project with the identity provider. But if you're running headless, such as in a Docker container, it'll give you a URL that you just copy over and open in your browser. So, whatever workflow you prefer for that will be fine. You'll get it installed, and then you'll run sft enroll. That will prompt you in your browser to pick what team you want to be in, and then you will authenticate. Now, if you are already authenticated in your browser, it can reuse that, depending on the settings that your identity provider administrator has chosen for how frequently they want to force people to reauthenticate. My settings on the demo org that we'll be playing with later are extremely lenient, 24-hour timeouts, which is the maximum, and so forth, because nothing of security importance should be happening in this org. So, there you have it. Instead of asking each host to know who's allowed to access it, or asking each user to jealously guard a single pair of secrets that would let any attacker impersonate them, we've centralized all that work into an identity provider that the company was probably already using some version of if you have things like email. It is overkill to do it for a single host, but once you have your first host and your first client enrolled, you can see how simple it is to scale up to more hosts and more clients. If you were using the identity provider for anything else, you can reuse what it knows about people's identity instead of reinventing that wheel. So once you have the setup, I'd recommend playing around with whatever features you find interesting. You can run sft ssh-config to generate the proxy command configuration, so that you can use your local SSH binary instead of the sft wrapper, like I did when I demonstrated it in the delightfully boring demo. You can also use a bastion to log all SSH and RDP sessions. So this creates logs instead of session recordings, which is nice because text is searchable. Logs are great not only for dealing with attackers, but also for doing forensics on your own work. They can answer the question of how did I or my teammate fix this thing last time it happened? What troubleshooting steps would our architect have taken when they're off on vacation? You can see when was the last time they did this task? What did they do? So you might add a new user to a group with permissions and log in as that new user without waiting for any other automation to run, because the host will be like, hey, identity provider? And the identity provider will be like, hey, host, let this one person in with this one key that we're just using this one time. And it's pretty seamless there. Or you can revoke a user's permissions in your identity provider and then try to log in as them and watch it not work. Now, as you play with this, as you find out the things that it can do, play around with it, eventually you'll get done testing it. And I always recommend cleaning up when you're done playing around with some tool like this. So to clean up after this exercise, you might want to uninstall any client that you installed on your laptop, or remove or archive any container that you installed a bunch of stuff in.
You'll probably want to spin down any servers you may have created for the exercise. You can just ignore the trial account and it will go away. It will time out after 30 days and no longer be an accessible account. But if you would like it to be fully deactivated, you can email support to do that. And if you're getting more emails than you'd like, you might want to use the unsubscribe button. So that brings the exercise to an end. To those who are watching the recording later, thank you for listening, and I hope that you learned something useful. Be warned that the further away you are from March of 2022, the less likely it is that these steps will work for you the same way that they worked for me, because software is always changing. Feel free to reach out to me with any questions that you have.
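For reference, here is a minimal sketch of the client-side commands from that walkthrough, as run on a Linux laptop. The package name, team name, and hostname are assumptions on my part (the Linux client package was called scaleft-client-tools around the time this was recorded), so check the current ASA client docs for your OS:

    # install the ASA client (after adding Okta's package repository per the docs),
    # then enroll this device into your team; enroll opens a browser to authenticate
    sudo apt-get install scaleft-client-tools
    sft enroll --team your-team-name

    # see what you're authorized to reach, and connect without any local key
    sft list-servers
    sft ssh devopsjs    # or: sft ssh-config >> ~/.ssh/config, then plain ssh devopsjs

    # cleanup when you're done experimenting
    sudo apt-get remove scaleft-client-tools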
