JavaScript Beats Cancer


Skin cancer is a serious problem worldwide, but luckily treatment at an early stage can lead to recovery. JavaScript, together with a machine learning model, can help medical doctors increase the accuracy of melanoma detection. During the presentation, we show how to use TensorFlow.js, Keras and React Native to build a solution that can recognize skin moles and detect whether they are a melanoma or a benign mole. We also show issues that we faced during development. As a summary, we present the pros and cons of JavaScript for machine learning projects.

25 min
20 Jun, 2022

Video Summary and Transcription

This Talk discusses using JavaScript to combat skin cancer, with a focus on machine learning integration. The speaker has experience in medical imaging and has partnered with dermatoscopy companies to develop hardware. JavaScript libraries like TensorFlow.js and Pandas.js are used for model deployment and data analysis. The Talk also covers building neural networks, analyzing skin cancer using scoring methods and image processing techniques, and extracting asymmetry in skin images using Python and JavaScript.

1. Introduction to JavaScript and Skin Cancer

Short description:

Hi, my name is Karol Przystalski, and today I will tell you more about how to use JavaScript to beat skin cancer. I have 15 years of experience in machine learning, specifically in medical imaging. I decided to cover this topic and build solutions in this area because of the increasing importance of skin cancer, especially in countries like Germany, Scandinavia, the US, and Australia. I have also partnered with dermatoscopy companies to develop hardware, such as the dermatoscope, which is used by dermatologists. My solution combines the dermatoscope with special lenses and light to capture high-quality images of skin moles.

Hi, my name is Karol Przystalski, and today I will tell you more about how to use JavaScript to beat skin cancer. I have about 15 years of experience in machine learning. My background is in computer science; I did a PhD in artificial intelligence, on how to apply it in medical imaging and dermatoscopy.

You can find some of my research papers on this topic on Google Scholar, for example. Here is one of the articles that I published, around five years ago, about analysing skin cancer on multispectral images. In that case I used Python, but because JavaScript has become more and more popular in recent years, including for this kind of work, I decided to prepare a presentation and a working app for skin cancer analysis as well.

My background is not only scientific. In 2010, 12 years ago, I founded a service company working for Fortune 500 companies, building data science and machine learning solutions. Before that I also did other commercial work, for example at IBM. So, as I said, I have 15 years of experience in machine learning, specifically in applications to medical imaging.

So why did I decide to cover this topic and build solutions in this area? As you can see, I am not really in the risk group when it comes to skin cancer, because the biggest risk group is blond people with blue eyes. This is phototype one, with the highest risk of skin cancer, especially if your skin doesn't tan when exposed to the sun but instead turns red; in this group, the risk of getting skin cancer is high.

The darker the skin is, and the less it reacts to the sun, the lower the probability of getting skin cancer. There are six phototypes of skin; I am more or less in the third group, because of my skin colour, hair colour, eye colour, and so on. That's why the problem is biggest in countries like Germany, Scandinavia and the Nordic countries, the US, and especially Australia; that is where it is more and more important.

In the meantime, I have also partnered with some dermatoscopy companies, that is, companies that develop the hardware. As you can see, here is one of the devices: a dermatoscope. It is a device used by dermatologists. In this case I have attached an iPhone to the front, because this is actually an extension; a typical dermatoscope doesn't come with an iPhone or any mobile phone, it is a standalone device. Some dermatologists use this kind of extension case just to take pictures more easily. It's quite small, so you can put it in your pocket and even visit a patient to take a look at a mole like this. This is how my solution is used: combined with special lenses and special light to get the best possible image of the skin mole.

When it comes to the data set, because any machine learning model should be fed with data: when I started my research, I had around 50 images, or even fewer. As you can imagine, that's not a big enough data set for any kind of research. So I met, I guess, almost every company, public or private, that does anything with dermatology in the city where I live, Krakow, in Poland. Most of them declined to collaborate on building models.

2. Machine Learning and JavaScript Integration

Short description:

Machine learning became a buzzword and the hype around AI has dramatically increased. Obtaining data sets for research has become easier, allowing the development of algorithms for various skin illnesses. Code samples and a Docker image with JavaScript libraries are available for download. The architecture involves combining machine learning with JavaScript to build a mobile app.

It was in 2007, 2008, and the way people thought about machine learning then was totally different from what's happening now. Machine learning and AI became buzzwords, and everyone wanted to do AI. In the past, 15 years ago, when I said AI, most people said: oh, no, thank you, I'm not interested. Now it's the total opposite: I need to explain to people why not to use machine learning rather than why to use it. So it changed dramatically, and the COVID pandemic even increased the hype around AI.

When I reached out to the companies, I obtained a data set of around 5,000 images. Now you can easily download a data set of about 26,000 images of skin moles; it is available on the ISIC Archive website (isic-archive.com), and you can use it for your research. So now it's even easier to develop algorithms to find different kinds of skin illnesses, not only cancer, which, luckily, is not the most common skin condition.

For all of you who want to use the code samples I have prepared, you can download my Docker image, which contains JupyterLab and JupyterHub together with JavaScript kernels and some JavaScript libraries. It's a bit old, because I have been maintaining it for many years, so I may update it soon, but it still works, and you can easily use it with the notebooks that I will show you next. As for the architecture, and how I combined machine learning with JavaScript: because of this device, I decided to use one of the JavaScript solutions to build a mobile app, because mobile devices change every year.

3. JavaScript Integration and Model Deployment

Short description:

I decided to use JavaScript instead of building a native application. For the machine learning part, I used TensorFlow.js, because it's the most robust machine learning library for JavaScript. The model is trained in Python, but it is used with JavaScript: JavaScript loads, uses, retrains, and deploys the model on mobile phones. It is also used together with Kubeflow and TensorFlow Serving. There is a web app that the mobile app connects to for retraining and for finding similar moles. A JavaScript library called Pandas.js is available, which is a port of the original Pandas library from Python.

As you can see, this is an iPhone 6S, quite old, and I will probably need to replace it with a newer one soon. Still, I am able to use this phone to take good-quality pictures, because the quality comes from the lens, not from the phone. That's why I decided to use JavaScript instead of building a native application.

In the past, I had to use different kinds of solutions, starting with Cordova and PhoneGap; now I am working with React Native. And for the machine learning part, I used TensorFlow.js. You might ask: why TensorFlow.js, and not, say, Keras or Torch? Well, there is one reason: it's the most robust machine learning library when it comes to JavaScript. As for why I chose JavaScript in this case: it's not that I said, okay, let's do everything in JavaScript. That's not true.

The truth is that the model is trained in Python, but it is used with JavaScript. I don't use TensorFlow.js for the training, and to be honest, I don't know anyone who does. Maybe that's because I have been in the data science field for a long time and know many people in this area: they mostly work in Python, or sometimes Scala, especially for big data. In this specific case, I use JavaScript exactly because TensorFlow.js makes it possible to load the model, use it, retrain it, and run it on a mobile phone. In production, it is used together with Kubeflow and TensorFlow Serving. There are two parts: the model that you see on the left can be bundled together with the main app, and that's how I did it here on the phone. But in many cases there is also a service that tries to find similar moles, similar lesions, and that is a web app. So the mobile app connects to the web app and uses it for retraining as well.

How does it look? Let me shortly move to some examples. Here we go. I prepared parts of this in the past for another conference, Hack for Ukraine, and I will just use part of that notebook. You can easily find it on my company's GitHub account, in a few repositories about machine learning, AI, and JavaScript. When you start any kind of data-related research in Python, probably the first library you think about is Pandas. And there is a port of this library in JavaScript, called, obviously, Pandas.js. To some degree it's very similar to the original Pandas from Python, but it's quite limited compared to the original.

4. JavaScript Libraries and TensorFlow

Short description:

Many advanced features from the original Pandas are not yet implemented in JavaScript. However, there are several other libraries available for data analysis and manipulation, such as DataFrame.js and DataForge. JavaScript also offers visualization libraries that often do a better job than Python's Matplotlib and Seaborn. When it comes to machine learning, Scikit-Learn is the popular choice in Python, but JavaScript has no comprehensive library like Scikit-Learn. TensorFlow, on the other hand, makes linear regression easy to implement and offers APIs in various programming languages.

Still, many features are not yet implemented. So to some degree you can use it in JavaScript, but many advanced features from the original Pandas are missing, especially the more statistical ones.

Other libraries that you can use in JavaScript for data analysis and manipulation include DataFrame.js, Reclaim, DataForge, and so on. There are plenty of such libraries; these are just a few examples of how to work with series and DataFrames. That's everyday material for people who work with data; for JavaScript engineers it is less typical, but it is still easy to manipulate the data and, for example, export it to JSON.

This notebook just shows a few examples of how to use that. When it comes to visualization, in my opinion JavaScript does a better job than Python. Obviously, Python has libraries like Matplotlib, Seaborn, and others, but when you compare them with what is available in JavaScript, I think JavaScript has a huge advantage: there are good charting and visualization libraries that in many cases simply do a better job than the Python ones. So that's one point for JavaScript.

Then there is Scikit-Learn, the most popular Python library for machine learning with shallow methods. TensorFlow is for building neural networks, but Scikit-Learn is about shallow methods, and most machine learning cases can, and often should, be solved with shallow methods. Scikit-Learn is the first library you would reach for in Python, and it is easy to use. In JavaScript, there are ports of Scikit-Learn, but they have not been updated in recent years. It looks like someone started something and then dropped the maintenance; not really a good library to use. There are plenty of libraries that implement one specific shallow method, like SVM, KNN, and so on, but if you want everything in one place, as in Python, there is unfortunately no such library in JavaScript.
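To make "shallow method" concrete: many of them are simple enough to hand-roll in a few lines of JavaScript when no maintained library fits. Here is a sketch of k-nearest neighbours; the feature names in the toy data are invented for illustration, not from my real pipeline.

```javascript
// A hand-rolled k-nearest-neighbours classifier in plain JavaScript, the
// kind of shallow method Scikit-Learn bundles in Python but which has no
// single well-maintained JavaScript home.

function knnPredict(trainX, trainY, point, k = 3) {
  const dist = (a, b) =>
    Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0));
  // Sort training points by distance to the query point, keep the k nearest.
  const neighbours = trainX
    .map((x, i) => ({ d: dist(x, point), label: trainY[i] }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k);
  // Majority vote among the k nearest labels.
  const votes = {};
  for (const n of neighbours) votes[n.label] = (votes[n.label] || 0) + 1;
  return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
}

// Toy data: [asymmetry score, border irregularity] -> label (illustrative).
const X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7]];
const y = ['benign', 'benign', 'melanoma', 'melanoma'];
console.log(knnPredict(X, y, [0.85, 0.8])); // prints "melanoma"
```

For two or three extracted features like these, a shallow method of this sort is often enough, which is the point made above about not every problem needing a neural network.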

When it comes to TensorFlow, here is a very easy example of how you can build a linear regression: how to import, how to use, how to implement it. And here is an image from the TensorFlow.js documentation. There are many kinds of APIs: the most popular is written in Python, but the backend is written in C++, and there are also bindings for Java, Go, JavaScript, and so on. So JavaScript is not special here; it's just another way to get to the core of TensorFlow.
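For readers without TensorFlow.js at hand, here is a dependency-free sketch of what such a one-feature linear regression actually computes, using the closed-form least-squares solution instead of gradient descent.

```javascript
// Closed-form least squares for one feature:
// slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
// A sketch of the fit a TensorFlow.js linear-regression example converges to.

function fitLine(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((s, v) => s + v, 0) / n;
  const my = ys.reduce((s, v) => s + v, 0) / n;
  let cov = 0, varX = 0;
  for (let i = 0; i < n; i++) {
    cov  += (xs[i] - mx) * (ys[i] - my);
    varX += (xs[i] - mx) ** 2;
  }
  const slope = cov / varX;
  return { slope, intercept: my - slope * mx };
}

// Points lying on y = 2x + 1.
const { slope, intercept } = fitLine([0, 1, 2, 3], [1, 3, 5, 7]);
console.log(slope, intercept); // 2 1
```

TensorFlow reaches the same line iteratively with an optimizer, which is what makes the same API scale up from this toy case to deep networks.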

5. Building Neural Networks and Analyzing Skin Cancer

Short description:

Here's an example of how to build a loss function, combine it, use it, import it, and train it. In Python, we typically use Keras to build neural networks. JavaScript has Keras.js, but it's not as developed as the Python version. When it comes to skin cancer, doctors use scoring methods like ABCD and check for patterns like asymmetry, border sharpness, and the number of colors. Using different wavelengths of light, such as infrared and UV, can provide more detailed information. Image processing techniques like binarization and fractal analysis can also be used.

Here again is an example of how to build a loss function, combine everything together, import it and train it. When it comes to neural networks, in Python we mostly build them with Keras or PyTorch. You can easily compose the network, building it from blocks: you have layers, you connect the layers together, and you can quickly build a very large network. In JavaScript, you have Keras.js, but to be honest, it's far, far behind the one developed for Python, unfortunately. Okay, that's one thing.
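What "build a loss function and train it" means under the hood can be shown without any framework. Here is a sketch: mean squared error plus a few gradient-descent steps on a single weight, the same loop `tf.layers` or Keras runs for you at scale.

```javascript
// Mean squared error loss and gradient descent on a one-weight model
// y = w * x, written in plain JavaScript as a sketch of what a Keras /
// TensorFlow.js training loop does internally.

const mse = (pred, target) =>
  pred.reduce((s, p, i) => s + (p - target[i]) ** 2, 0) / pred.length;

function trainLinear(xs, ys, steps = 200, lr = 0.05) {
  let w = 0; // the single trainable weight
  for (let s = 0; s < steps; s++) {
    // dL/dw for L = mean((w*x - y)^2) is mean(2 * (w*x - y) * x).
    const grad =
      xs.reduce((g, x, i) => g + 2 * (w * x - ys[i]) * x, 0) / xs.length;
    w -= lr * grad; // step against the gradient
  }
  return w;
}

const xs = [1, 2, 3, 4];
const ys = [3, 6, 9, 12]; // generated by y = 3x
const w = trainLinear(xs, ys);
console.log(w, mse(xs.map(x => w * x), ys)); // w converges to 3, loss to 0
```

A real network repeats exactly this, just with millions of weights and automatic differentiation instead of a hand-derived gradient.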

Let's move back to the slides. When it comes to skin cancer, as you can see here, you have three images, and only two of them are actually cancers: the one on the left and the one on the right. On the left, you can see some patterns: the dots here and the stripes, some visible vessels, and all of that is not very symmetric, though the borders are quite smooth. On the right, you have a pattern called the blue-white veil: you can see whitish-blue colors here. It means the cancer is going deeper into the skin and trying to reach the vessels. That's bad for the patient, but it's also a pattern that tells us: this is really bad. In the middle, you have an example of a suspicious mole that was not confirmed to be cancer; all of these are pathology-confirmed as cancer or not.

So how do medical doctors do that? They use scoring methods like ABCD, the seven-point checklist, the three-point checklist, the Menzies method, and so on. They score features like asymmetry, border sharpness, and the number of colors; in ABCD there are six colors that count, and they check how many of them appear. When you combine it all together and write down the different patterns one by one, you can easily count more than 30 patterns that doctors can find on the image, or just by using the loupe directly.

What you can also do, and what I did in my research, is use different wavelengths of light to capture not only what you see in visible light, but also what's deeper in the skin, using for example infrared light. I used four different wavelengths of light in total, but here you can see infrared and UV, used to make the melanin and the vessels more visible. That gives you more details, more information, and lets you do better research. You can also do some image processing, like binarization in this case, to find out that a border is not smooth; you can use fractal methods, for example, to analyze that.
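The binarization step mentioned here is simple enough to sketch directly. This is an illustrative version with a fixed threshold; a real pipeline would pick the threshold automatically (for example with Otsu's method).

```javascript
// Threshold a grayscale image (values 0-255) into a 0/1 mask so the lesion
// and its border can be analysed. Sketch only: the threshold is a fixed
// assumption here, not the adaptive choice a real pipeline would make.

function binarize(gray, threshold = 128) {
  // Dark pixels (the mole) become 1, bright background becomes 0.
  return gray.map(row => row.map(v => (v < threshold ? 1 : 0)));
}

const image = [
  [250, 240, 245],
  [230,  40, 235],
  [240,  35,  50],
];
console.log(binarize(image)); // [[0,0,0],[0,1,0],[0,1,1]]
```

Once you have the 0/1 mask, border smoothness and fractal measures are computed on the mask rather than on the raw pixels.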

6. Extracting Asymmetry in Skin Images

Short description:

In this demo, I will show you an example of how to extract asymmetry in skin images using Python and JavaScript. By dividing the picture into regions and calculating the symmetry of opposite sides of the blocks, we can easily count the asymmetry. However, some patterns require more sophisticated models based on neural networks.

All right, the next demo, because I would like to show you what kinds of methods you can use. Here is one of the examples; you can find more in my GitHub repository. Here is a way to extract the asymmetry: first of all, extract the region, so we take this part. I used the ISIC data set, as you can see, and the next step is to calculate the asymmetry. In this case it's Python, because I developed the model in Python and then exported it to be used in JavaScript. You divide the picture into regions and calculate the symmetry of the opposite sides of the blocks. This is how you can easily count the asymmetry. Basic image processing like this allows you to find some of the patterns. Some of them are more difficult, and you need to build sophisticated models based on neural networks. Please feel free to read more about that.

But what is important, because I mentioned neural networks and also the shallow models: just to give you a better understanding, we have black-box and white-box models. The black boxes are neural networks. Here you can see a very short network with three layers; it is easily trained to an accuracy of 98%, which is very high. But if you print the weights of just one layer, it looks like this. If you try to explain what each of these numbers means, it's very hard, maybe not even possible. There are explainability methods to interpret the weights in a neural network, but it's more complex.
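The block-comparison idea described above can be sketched as follows: mirror the binary lesion mask and count disagreeing pixels, normalised by lesion area. This is an illustrative simplification of the region-based method, checking only one axis of symmetry.

```javascript
// Block/mirror asymmetry sketch: flip the binary lesion mask left-to-right
// and count how many pixels disagree with their mirror, normalised by the
// lesion area. 0 means perfectly symmetric about the vertical axis.

function asymmetryScore(mask) {
  const h = mask.length, w = mask[0].length;
  let mismatch = 0, area = 0;
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      area += mask[y][x];
      // Compare each pixel with its mirror across the vertical axis.
      if (mask[y][x] !== mask[y][w - 1 - x]) mismatch++;
    }
  }
  // Each disagreeing pair is counted from both sides, hence the factor 2.
  return area === 0 ? 0 : mismatch / (2 * area);
}

const symmetric  = [[0, 1, 0], [1, 1, 1], [0, 1, 0]];
const asymmetric = [[1, 1, 0], [1, 1, 0], [1, 0, 0]];
console.log(asymmetryScore(symmetric));  // 0
console.log(asymmetryScore(asymmetric)); // 0.6
```

A fuller version would also rotate the mask to find the principal axes and score both of them, which is closer to what the ABCD asymmetry criterion asks for.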

Check out more articles and videos

We constantly think of articles and videos that might spark your interest, skill you up, or help you build a stellar career.

JSNation Live 2021
27 min
Building Brain-controlled Interfaces in JavaScript
Top Content
Neurotechnology is the use of technological tools to understand more about the brain and enable a direct connection with the nervous system. Research in this space is not new; however, its accessibility to JavaScript developers is. Over the past few years, brain sensors have become available to the public, with tooling that makes it possible for web developers to experiment with building brain-controlled interfaces. As this technology is evolving and unlocking new opportunities, let's look into one of the latest devices available, how it works, the possibilities it opens up, and how to get started building your first mind-controlled app using JavaScript.
6 min
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
Featured Article
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at Netlify, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.

What led you to software engineering?

My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.

What is the most impactful thing you ever did to boost your career?

I think it might be public speaking. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.

What would be your three tips for engineers to level up their career?

Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies but as you grow in your career, there comes a time where you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be?

And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.

What are you working on right now?

Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection.

Do you have some rituals that keep you focused and goal-oriented?

Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.

You wrote a book called Practical Machine Learning in JavaScript. What got you so excited about the connection between JavaScript and ML?

The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.


Where do you see the fields going together in the future, near or far?

I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.

You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?

I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.

How has content affected your career?

I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created different opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but this is not something that's guaranteed. I think over time, I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always apply anymore. I sometimes get invited on podcasts and asked if I want to create video content and things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.

What pieces of your work are you most proud of?

It is probably that I've managed to develop a mindset where I set myself hard challenges on my side project, and I'm not scared to fail and push the boundaries of what I think is possible. I don't prefer a particular project, it's more around the creative thinking I've developed over the years that I believe has become a big strength of mine.

Follow Charlie on Twitter
ML conf EU 2020
41 min
TensorFlow.js 101: ML in the Browser and Beyond
Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired through a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way) and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice which could then be used in any creative way you can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!
JSNation 2022
21 min
Crafting the Impossible: X86 Virtualization in the Browser with WebAssembly
WebAssembly is a browser feature designed to bring predictable high performance to web applications, but its capabilities are often misunderstood.
This talk will explore how WebAssembly is different from JavaScript, from the point of view of both the developer and the browser engine, with a particular focus on the V8/Chrome implementation.
WebVM is our solution to efficiently run unmodified x86 binaries in the browser and showcases what can be done with WebAssembly today. A high level overview of the project components, including the JIT engine, the Linux emulation layer and the storage backend will be discussed, followed by live demos.
React Advanced Conference 2023
29 min
Raising the Bar: Our Journey Making React Native a Preferred Choice
At Microsoft, we're committed to providing our teams with the best tools and technologies to build high-quality mobile applications. React Native has long been a preferred choice for its high performance and great user experience, but getting stakeholders on board can be a challenge. In this talk, we will share our journey of making React Native a preferred choice for stakeholders who prioritize ease of integration and developer experience. We'll discuss the specific strategies we used to achieve our goal and the results we achieved.
React Finland 2021
27 min
Opensource Documentation—Tales from React and React Native
Documentation is often your community's first point of contact with your project and their daily companion at work. So why is documentation the last thing that gets done, and how can we do it better? This talk shares how important documentation is for React and React Native and how you can invest in or contribute to making your favourite project's docs to build a thriving community

Workshops on related topic

React Summit 2022
117 min
Detox 101: How to write stable end-to-end tests for your React Native application
Top Content
WorkshopFree
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know it can be pretty challenging. Especially when we talk about Mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand - end-to-end tests can give the greatest confidence that your app is working. And if done right - can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: macOS Catalina or newer
- Android only: Linux
- Install before the workshop
React Advanced Conference 2022
81 min
Introducing FlashList: Let's build a performant React Native list all together
Top Content
Workshop (Free)
In this workshop you’ll learn why we created FlashList at Shopify and how you can use it in your code today. We will show you how to take a list that is not performant in FlatList and make it performant using FlashList with minimum effort. We will use tools like Flipper and our own benchmarking code, and teach you how the FlashList API can cover more complex use cases while still keeping top-notch performance.
You will know:
- A quick presentation of what FlashList is and why we built it
- Migrating from FlatList to FlashList
- How to write a performant list
- Utilizing the tools provided by the FlashList library (mainly the useBenchmark hook)
- Using the Flipper plugins (flame graph, our lists profiler, UI & JS FPS profiler, etc.)
- Optimizing the performance of FlashList by using more advanced props like `getType`
- 5-6 sample tasks where we’ll uncover and fix issues together
- Q&A with the Shopify team
React Summit Remote Edition 2021
60 min
How to Build an Interactive “Wheel of Fortune” Animation with React Native
Top Content
Workshop
- Intro: Cleo & our mission
- What we want to build, how it fits into our product & purpose, run through designs
- Getting started with environment set up & “hello world”
- Intro to React Native Animation
- Step 1: Spinning the wheel on a button press
- Step 2: Dragging the wheel to give it velocity
- Step 3: Adding friction to the wheel to slow it down
- Step 4 (stretch): Adding haptics for an immersive feel
React Advanced Conference 2023
159 min
Effective Detox Testing
Workshop
So you’ve gotten Detox set up to test your React Native application. Good work! But you aren’t done yet: there are still a lot of questions you need to answer. How many tests do you write? When and where do you run them? How do you ensure there is test data available? What do you do about parts of your app that use mobile APIs that are difficult to automate? You could sink a lot of effort into these things—is the payoff worth it?
In this three-hour workshop we’ll address these questions by discussing how to integrate Detox into your development workflow. You’ll walk away with the skills and information you need to make Detox testing a natural and productive part of day-to-day development.
Table of contents:
- Deciding what to test with Detox vs React Native Testing Library vs manual testing
- Setting up a fake API layer for testing
- Getting Detox running on CI on GitHub Actions for free
- Deciding how much of your app to test with Detox: a sliding scale
- Fitting Detox into your local development workflow
Prerequisites:
- Familiarity with building applications with React Native
- Basic experience with Detox
- Machine setup: a working React Native CLI development environment including either Xcode or Android Studio
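One agenda item above is setting up a fake API layer so end-to-end runs are deterministic. A minimal sketch of the idea in plain JavaScript is below; the route table, paths, and data are hypothetical, not from the workshop itself:

```javascript
// Hypothetical fake API layer: instead of hitting the real backend,
// tests resolve requests against a table of canned responses, so a
// Detox run always sees the same data.
const routes = {
  '/api/user': { name: 'Test User' },
};

// Drop-in stand-in for fetch: known routes return canned JSON,
// anything else returns a 404-shaped response.
function fakeFetch(path) {
  if (path in routes) {
    return Promise.resolve({
      ok: true,
      status: 200,
      json: () => Promise.resolve(routes[path]),
    });
  }
  return Promise.resolve({
    ok: false,
    status: 404,
    json: () => Promise.resolve({ error: 'not found' }),
  });
}
```

In a real setup, the app would be pointed at this stub (for example by swapping its network module during test builds), which is one way to keep tests from being slow and flaky.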
React Summit 2023
88 min
Deploying React Native Apps in the Cloud
Workshop (Free)
Deploying React Native apps manually on a local machine can be complex. The differences between Android and iOS require developers to use specific tools and processes for each platform, and iOS adds hardware requirements of its own. Manual deployments also make it difficult to manage signing credentials and environment configurations, track releases, and collaborate as a team.
Appflow is the cloud mobile DevOps platform built by Ionic. Using a service like Appflow to build React Native apps not only provides access to powerful computing resources, it can simplify the deployment process by providing a centralized environment for managing and distributing your app to multiple platforms. This can save time and resources, enable collaboration, as well as improve the overall reliability and scalability of an app.
In this workshop, you’ll deploy a React Native application for delivery to Android and iOS test devices using Appflow. You’ll also learn the steps for publishing to Google Play and Apple App Stores. No previous experience with deploying native applications is required, and you’ll come away with a deeper understanding of the mobile deployment process and best practices for how to use a cloud mobile DevOps platform to ship quickly at scale.
React Advanced Conference 2022
131 min
Introduction to React Native Testing Library
Workshop
Are you satisfied with your test suites? If you said no, you’re not alone—most developers aren’t. And testing in React Native is harder than on most platforms. How can you write JavaScript tests when the JS and native code are so intertwined? And what in the world are you supposed to do about that persistent act() warning? Faced with these challenges, some teams are never able to make any progress testing their React Native app, and others end up with tests that don’t seem to help and only take extra time to maintain.
But it doesn’t have to be this way. React Native Testing Library (RNTL) is a great library for component testing, and with the right mental model you can use it to implement tests that are low-cost and high-value. In this three-hour workshop you’ll learn the tools, techniques, and principles you need to implement tests that will help you ship your React Native app with confidence. You’ll walk away with a clear vision for the goal of your component tests and with techniques that will help you address any obstacle that gets in the way of that goal.
You will know:
- The different kinds of React Native tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting text, image, and native code elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RNTL tests and how to handle them
- Options for handling native functions and components in your JavaScript tests
Prerequisites:
- Familiarity with building applications with React Native
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Native Testing Library
- Machine setup: Node 16.x or 18.x, Yarn, be able to successfully create and run a new Expo app following the instructions on https://docs.expo.dev/get-started/create-a-new-app/