
The Business of Open Source
Whether you're a founder of an open source startup, an open source maintainer, or just an open source enthusiast, join host Emily Omier as she talks to the people who work at the intersection of open source and business: startup founders, leaders of open source giants, and everyone who helps open source startups grow.
Latest episodes

Jan 6, 2021 • 14min
Positioning for Startups in the Cloud Native Ecosystem
Here's what I covered in this episode:
What positioning and market segmentation is and is not
The specific positioning challenges facing companies in the cloud native ecosystem
Why it's important to identify and talk about the types of application your product benefits the most
Thanks for listening, and happy new year!

Dec 16, 2020 • 31min
The Business of Cloud Native #30 with Jim Bugwadia
Jim Bugwadia, CEO and co-founder of Nirmata, talks about what has changed (and what has stayed the same) since the company started in 2013.
Links:
https://www.linkedin.com/in/jimbugwadia/
https://nirmata.com
https://kyverno.io

Dec 10, 2020 • 33min
The Business of Cloud Native #29 with Krishnan Subramanian
In episode 29 of The Business of Cloud Native, I talked to Krishnan Subramanian of Rishidot Research about trends he sees in how end users use cloud native technologies and how startups in the space can meet end users where they are.
Links:
https://rishidot.com
https://www.linkedin.com/in/krishnansubramanian/
https://twitter.com/krishnan

Dec 2, 2020 • 27min
Solving Application Networking Challenges with Idit Levine
This conversation covers:Idit’s role at Solo.io, and what she typically does on a daily basis. Idit also talks about how her job duties have changed over the last two years, and the impact that COVID-19 had on the company.The common business reasons why customers come to Solo.io — and where they typically are in terms of cloud-native maturity. Some things that Idit has learned about customers over the last two years. In addition, Idit talks about her own journey at Solo.io and what she’s had to learn along the way.How Idit’s customers typically benefit from using distributed systems — and some of the top misconceptions that they tend to have about using them.Idit’s thoughts on the market for cloud-native technologies.LinksSolo.ioFollow Idit on TwitterSlackTranscriptEmily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to the Business of Cloud Native. I'm Emily Omier, your host, and today I'm chatting with Idit Levine of Solo.io. Idit, I want to start out, first of all, by thanking you for joining me. Idit: Oh, thanks so much for having me. Emily: And then, second of all, I wanted you to just start off by introducing yourself: what you do, what your company does, and also a little bit about how that translates into what you do every day, like, what activities you spend your day doing. Idit: Oh, for sure. Okay, so as you said, my name is Idit Levine. And I’m, right now, the founder and the CEO of Solo.io. I started Solo two years ago, and when I started it, my focus was try to solve our [00:01:24 unintelligible] application networking problem that we know that will come up. So, what does it mean? As you guys all know, there was a huge shift in the market between monolithic to microservices and, kind of like, moving from technology of monolithic to microservice stack mean that now we also moved to a distributed application. And it was clear to me that now everything is basically will go on the wire; any communication, small communication, between those two microservices basically will have to go to the network. And I thought that would become a big problem because stuff that we didn't need to take care of when everything was the same binary, now we need to actually figure out how to solve. And basically, I was really passionate, thought that that will be a huge problem in the ecosystem and I was very passionate to actually try to solve that. So, the idea was, how to connect, right? How to connect the application, how to connect everything related to your, eventually, application to the user.Emily: And then tell me a little bit, what do you do every day? When you start, what does an average day actually consist of?Idit: Oh, wow. So, it's really interesting, that I think it's a huge difference between now and what I was doing a year ago. Right now, basically, it's pretty simple. Corona came by and it was influence a lot of companies. 
I was assume that it will influence also my company, and therefore I basically freeze hiring, freeze everything, and try to do the best I can with the resources that we had. What happened is that actually, not only that we didn't was influenced, we actually over doubled our revenue every quarter. That's basically forced me to immediately grow the team to be able to actually serve all those customers. Right now, basically, the main thing that I'm focusing on is—besides the technology, of course, in the strategic of the company—is basically on growing the team. So, it's hiring, it's interviewing, it's looking for the right people, it's building. You know, basically try to grow the team as much as I can in order to basically, yeah, serve well, the customer that are asking for us to—you know, for our products. That's a lot of my focus this day.Emily: And what do you find are the business reasons? What's the business problems that cause somebody to come to you?Idit: So, as I said, once people basically is moving from monolithic to microservices, there is a lot of simple stuff that before that just natively happened inside of the organization; right now, it's a little bit more complex. So, first of all, they needed to find something to run it on, and this is what Kubernetes so great in this ecosystem is the ability to install, upgrade, and basically orchestrate their microservices. But then, as I said, simple stuff that before that people were baking into the microservices created a lot of issues, like small stuff, like how do two microservices communicate with each other? How do you make sure that they're doing it safely right now? Because as right now, it's all on the wire, so potentially, there's always a third party that could, you know, join the party. So, you really need to be safe and make sure that there is a very secure line between those microservices. And then the last thing is that because there is so many because the idea of microservices was to allow you to scale, the question is how do where the request is actually routed? So, in the [00:04:52 unintelligible], request is coming, and there is a lot of replication of the same microservices, and you have no idea basically where it's coming and where it's landing. And then it will go to the next level of the microservices, and again, not know which instance of it is basically being hit. So, now the question is, how do you get visibility to something like that? How do you know what's going on in your cluster? How do what to look for the logs when now it's distributed all over the place. So, that's a lot of problem that the organization basically started to have. As well as with this—if—before that, there was a technology called [00:05:26 api-get] that was relatively popular, but people somehow—it wasn't a must. Right now, when microservices was adopted specific in environment like Kubernetes, when everything is very cloud-wise, you know, stuff is coming up and coming down, you really wanted to make sure that you have a place that you can actually control the policy, control the [00:05:50 unintelligible], the [00:05:51 unintelligible]. And that's basically where API can help. So, that is basically—how do you manage all this networking, basically, of all these systems and applications, as an edge gateway? It's something that going inside your cluster, as well as what's going on inside the cluster after it. And that's basically, yeah, the main problem that you're solving. 
So, every traffic to your infrastructure, north to south, we're basically taking care of, as well as everything after that: the traffic between what's called East and West, inside your cluster. And that's basically the st...

Nov 25, 2020 • 39min
Positioning Open Source Projects with Sam Selikoff
This conversation covers:Mirage’s role as an API mocking library, the value that it offers for developers, and who can benefit from using it.How Mirage empowers front end developers to create production-ready UIs as quickly as possible.How Mirage evolved into an API mocking library How Mirage differs from JSON Server Sam’s relationship to Mirage, and how it fits in with his business. Sam also talks about open source business models, and whether Mirage could work as a SaaS offering.One interesting use case for Mirage, which involves demoing software and driving sales.LinksMirageSam’s teaching siteFollow Sam on TwitterSubscribe to Sam’s YouTube ChannelTranscriptEmily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to the Business of Cloud Native. My name is Emily, I'm your host, and today I'm chatting with Sam Selikoff. Thank you so much for joining us, Sam.Sam: Thanks for having me.Emily: Yeah. So, today, we're going to do something a little bit different, and we're going to talk about positioning for open source projects. A lot of people talk about positioning for companies, which is also really important. And they don't always think about how positioning is important for open source. Open source maintainers often don't like to talk about marketing because you're not selling anything. But you are asking people to give you their time which, at least for some people, is actually more valuable than their money. And that means you have to make a compelling case for why it's worth it to contribute to your project, and also why they should use it, why they should care about it? So, anyway, we're going to talk with Sam, about Mirage. But first, I should let you introduce yourself. Sam, thank you so much for joining me, and can you introduce yourself a little bit?Sam: Sure. My name is Sam Selikoff. These days, I spend most of my time teaching people how to code in the form of videos on my YouTube channel, and my website, embermap.com. Most of it is front end web development focused. So, we focus on JavaScript. I have a business partner who also works with me. And then we also do custom app development, you know, some consulting throughout the year.Emily: Cool. And then tell me a little bit about Mirage.Sam: Yeah, so Mirage is the biggest open source project I've been a part of since falling into web development, I'd say about eight years ago, I got into open source pretty early on in programming, kind of what made me fall in love with web development and JavaScript. So, I was starting to help out and just get involved with existing projects and things that I was using. Eventually, I made my way to TED Talks, the conference company where I was a front end developer, and that's actually where I met my business partner, Ryan. And we were using Ember.js, which is a JavaScript framework, and we had lots of different apps at TED that were helping with various parts of publishing talks, and running conferences, and all that stuff. 
And we were seeing some common setup code that we were using across all these apps to help us test them, and that's where Mirage came from. There was another project called Pretender, which helped you mock out servers so that you could test your front end against different server states. And we first wrapped that with something called Pretenderify, and then it grew in complexity. So, I was working on it on my learning Wednesdays, renamed it to Mirage, and then I've been working on it basically ever since. And then, the other big step, I guess, in the history is that originally was an Ember only project, and then last year, we worked on generalizing it so that it can be used by React developers, React Native developers, Vue developers, so now it's just a general-purpose JavaScript API mocking library.Emily: So, we would say that the position is an API mocking library. And—does that sound right?Sam: Yeah. If I had to say what it is, I would say it's a mocking library that helps front end developers mock out backend API's so that they can develop and test the user interfaces without having to rely on back end services.Emily: Why does that matter?Sam: It matters because back end services can be very complicated, there can be multiple back end services that need to run in order to support a UI, and if you're a front end developer, and you just want to make a change and see what the shopping cart looks like when it's empty. What does the shopping cart look like when there's one item? What does it look like when there's 100 items, and we have to have multiple pages? All three of those states correspond to different data in some back end service, usually in a database. And so, for a front end developer, or anyone working on the user interface, really, it can be time-consuming and complex to put that actual server in that state that they need to help them develop the UI. That can involve anything from running, like, a Rails server on their computer to getting other API's that other teams manage into the state they need to develop the UI. So, Mirage lets them mock that out and basically have a fake server that they control and they can put into any state they need. So, it’s like a simplified version of back end services that the front end developer can control to help them develop and test the UI.Emily: And when you first started Mirage, did you think of it as an API mocking library?Sam: Not exactly. We used it mostly because of testing. So, in a test, it's usually a best practice to not have your test rely on an actual network. You want to be able to run your test suite of your user interface anywhere, let's say on an airplane or something like that. So, if your user interface relies on live back end services, that's usually where you would bring in a mocking library. And then you would say, okay, when the user visits amazon.com/cart, normally, it would go try to fetch the items in your cart from a real server, but in the test, we're going to say, “Oh, when my app does that, let's just respond with zero items. And then in this next test, when my app does that, let's respond with three items.” So, that's the motivation originally, is in a testing environment, giving the UI developer control over that. And then what happened was that it was so useful, we started using it in development as well, just to help during normal times, just because it was faster than working with the real back end services....
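For readers who want a concrete picture of what Sam describes, here is a minimal sketch of a Mirage server mocking a shopping-cart endpoint. It is an illustration only: the model name, route, and seed data are assumptions made for this example, not code from the episode, though createServer, Model, and the routes/seeds hooks are Mirage's actual API surface.

```typescript
import { createServer, Model } from "miragejs";

// A tiny mock backend for the "shopping cart" scenario Sam mentions.
// Names like "cartItem" and "/api/cart" are hypothetical.
export function makeServer() {
  return createServer({
    models: {
      cartItem: Model,
    },

    seeds(server) {
      // Seed whichever state you want to develop the UI against.
      server.create("cartItem", { name: "Notebook", price: 12 });
    },

    routes() {
      this.namespace = "api";

      // The UI fetches /api/cart; Mirage answers from its in-memory database
      // instead of hitting a real backend service.
      this.get("/cart", (schema) => schema.all("cartItem"));
    },
  });
}
```

In a test or a demo, a call such as server.createList("cartItem", 100) would put the mock backend into the hundred-item state without standing up any real services, which is the workflow Sam describes for developing and testing different UI states.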

Nov 18, 2020 • 31min
Discussing Bloomberg’s Cloud Native Journey with Andrey Rybka
This conversation covers:How Bloomberg is demystifying bond trading and pricing, and bringing transparency to financial markets through their various digital offerings.Andrey’s role as CTO of compute architecture at Bloomberg, where he oversees research implementation of new compute related technologies to support kind of our business and engineering objectives.Why factors like speed and reliability are integral to Bloomberg’s operations, and how they impact Bloomberg’s operations . Andrey also talks about how they impact his approach to technology, and why they use cloud-native technology.How Andrey and his team use containers to scale and ensure reliability.Why portability is important to Bloomberg’s applications.Bloomberg’s journey to cloud-native. Some of the open-source services that Andrey and his team are using at Bloomberg.Unexpected challenges that Andrey has encountered at Bloomberg.Primary business value that Bloomberg has experienced from their cloud-native transition.LinksBloombergBloomberg GitHubFollow Andrey on TwitterConnect with Andrey on LinkedInTranscriptEmily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to The Business of Cloud Native, I'm your host Emily Omier. And today I'm chatting with Andrey Rybka from Bloomberg, thank you so much for joining us, Andrey.Andrey: Thank you for your invitation.Emily: Course. So, first of all, can you tell us a little bit about yourself and about Bloomberg?Andrey: Sure. So, I lead the secure computer architecture team, as the name suggests, in the CTO office. And our mission is to help with research implementation of new compute-related technologies to support our business and engineering objectives. But more specifically, we work on ways to faster provision, manage, and elastically scale compute infrastructure, as well as support rapid application development and delivery. And we also work on developing and articulating company’s compute strategic direction, which includes the compute storage middleware, and application technologists, and we also help us product owners for the specific offerings that we have in-house. And as far as Bloomberg, so Bloomberg was founded in 1981 and it's got very large presence: about 325,000 Bloomberg subscribers in about 170 countries, about 20,000 employees, and more news reporters than The New York Times, Washington Post, and Chicago Tribune combined. And we have about 6000 plus software engineers, so pretty large team of very talented people, and we have quite a lot of data scientists and some specialized technologists. And some impressive, I guess, points is we run one of the largest private networks in the world, and we move about a hundred and twenty billion pieces of data from financial markets each day, with a peak of more than 10 million messages a second. We generate about 2 million news stories—and they're published every day—and then news content, we consuming from about 125,000 sources. 
And the platform allows and supports about 1 million messages, chats handled every day. So, it's very large and high-performance kind of deployment.Emily: And can you tell me just a little bit more about the types of applications that Bloomberg is working on or that Bloomberg offers? Maybe not everybody is familiar with why people subscribe to Bloomberg, what the main value is. And I'm also curious how the different applications fit into that.Andrey: The core product is Bloomberg Terminal, which is Software as a Service offering that is delivering diverse array of information of news and analytics to facilitate financial decision-making. And Bloomberg has been doing a lot of things that make financial markets quite a bit more transparent. The original platform helped to demystify a lot of bond trading and pricing. So, the Bloomberg Terminal is the core product, but there's a lot of products that are focused on the trading solutions, there is enterprise data distribution for market data and such, and there is a lot of verticals such as Bloomberg Media: that's bloomberg.com, TV, and radio, and news articles that are consumer-facing. But also there is Bloomberg Law, which is offering for the attorneys, and there is other verticals like New Energy Finance, which helps with all the green energy and information that helps a lot to do with helping with climate change. And then there's Bloomberg Government, which is focused on, specifically, research around government-specific data feeds. And so in general, you've got finance, government, law, and new energy as the key solutions.Emily: And how important is speed?Andrey: It is extremely important because, well, first of all, obviously, for traders, although we're not in high-frequency game, we definitely want to deliver the news as fast as possible. We want to deliver actionable financial information as fast as possible, so definitely it is a major factor, but also not the only factor because there's other considerations like reliability and quality of service as well.Emily: And then how does this translate to your approach to new technology in general? And then also, why did you think cloud-native might be a good technology to look into and to adopt?Andrey: So, I guess if we define cloud-native, a little because I think there's different definitions; many people think of containers immediately. But I think that we need to think of outside of not just, I guess, containers, but I guess the container orchestration and scaling elastically, up and down. And those, I guess, primitives. So, when we originally started on our cloud-native journey, we had this problem of we were treating our machines as pets if you know the paradigm of pets versus cattle where pet is something that you care for, and there’s, like, literally the name for it, you take it to the vet if it gets sick. And when you use think of herd of cattle, there's many of them, and you can replace, and you have quite a lot of understanding of scalability with the herd versus pets. So, we started moving towards that direction because we wanted to have more uniform infrastructure, more heterogeneous. And we started with VMs. So, we didn't necessarily jump to containers. And then we started thinking like, “Is VMs the right abstraction?” And for some workloads it is, but then in some cases, we started thinking, “Well, maybe we need something more lightweight.” So, that's how we started looking at containers because ...

Nov 11, 2020 • 23min
How Systematic Approaches Cloud-Native with Thomas Vitale
This conversation covers:An average workday for Thomas as senior systems engineer at Systematic.How Systematic uses cross-functional collaboration to solve problems and produce high quality software.How security and data privacy relate to cloud-native technologies, and the challenges they present. Systematic’s journey to cloud native, and why the company decided it was a good idea. Why it’s important to consider the hidden costs and complexities of cloud-native before migrating.What makes an application appropriate for the cloud, and some tips to help with making that decision.The biggest surprises that Thomas has encountered when moving applications to cloud-native technology. Thomas’s new book, Cloud Native Spring in Action, which is about designing and developing cloud-native applications using Spring Boot, Kubernetes, and other cloud-native technologies. Thomas also talks about who would benefit from his book.Thomas’s background and experience using cloud-native technology.The biggest misconceptions about cloud-native, according to Thomas.LinksSystematicCloud Native Spring in Action bookThomas Vitale personal websiteFollow Thomas on TwitterConnect with Thomas on LinkedInTranscriptEmily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to The Business of Cloud Native. I'm your host, Emily Omier, and today I'm chatting with Thomas Vitale. Thomas, thanks so much for joining us.Thomas: Hi, Emily. And thanks for having me on this podcast.Emily: Of course. I just like to start by asking everyone to introduce themselves. So, Thomas, can you tell us a little bit about what you do and where you work, and how you actually spend your day?Thomas: Yes, I work as a senior systems engineer at Systematic. That is a Danish company, where I design and develop software solutions in the healthcare sector. And I really like working with cloud-native technologies and, in particular, with Java frameworks, and with Kubernetes, and Docker. I'm particularly passionate about application security and data privacy. These are the two main things that I've been doing, also, in Systematic.Emily: And can you tell me a little bit about what a normal workday looks like for you?Thomas: That's a very interesting question. So, in my daily work, I work on features for our set of applications that are used in the healthcare sector. And I participate in requirements elicitation and goal clarification for all new features and new set of functionality that we'd like to introduce in our application. And I'm also involved in the deployment part, so I work on the full value stream, we could say. 
So, from the early design and development, and then deploying the result in production.Emily: And to what extent, at Systematic, do you have a division between application developers and platform engineers, or however else you want to call them—DevOps teams?Thomas: In my project, currently, we are going through what we can call as maybe a DevOps transformation, or cloud transformation because we started combining different responsibilities in the same team, so in a DevOps culture, where we have a full collaboration between people with different expertise, so not only developers but also operators, testers. And this is a very powerful collaboration because it means putting together different people in a team that can bring an idea to production in a very high-quality way because you have all the skills to actually address all the problems in advance, or to foresee, maybe, some difficulties, or how to better make a decision when there's different options because you have not only the point of view of a developer—so how is better the code—but also the effects that each option has in production because that is where the software will live. And that is the part that provides value to the customers. And I think it's a very important part. When I first started being responsible, also, for the next part, after developing features, I feel like I really started growing in my professional career because suddenly, you approach problems in a totally different way. You have full awareness of how each piece of a system will behave in production. And I just think it's, it's awesome. It's really powerful. And quality-wise, it's a win-win situation.Emily: And I wanted to ask also about security and data privacy that you mentioned being one of your interests. How do those two concepts relate to cloud-native technologies? And what are some of the challenges in being secure and managing data privacy specifically for cloud-native?Thomas: I think in general, security has always been a critical concern that sometimes is not considered at the very beginning of the development process, and that's a mistake. So, the same thing should happen in a cloud-native project. Security should be a concern from day one. And the specific case of the Cloud: if we are moving from a more traditional system and more traditional infrastructure, we have a set of new challenges that have to be solved because especially if we are going with a public cloud, starting from an on-premise solution, we start having challenges about how to manage data. So, from the data privacy point of view, we have—depending also on the country—different laws about how to manage data, and that is one of the critical concerns, I think, especially for organizations working in the healthcare domain, or finance—like banks. The data ownership and management can really differ depending on the domain. And in the Cloud, there's a risk if you're not managing your own infrastructure in specific cases. So, I think this is one of the aspects to consider when approaching a cloud-native migration: how your data should be managed, and if there is any law or particular regulation on how they should be managed.Emily: Excellent. And can you actually tell me a little bit about Systematic’s journey to cloud-native and why the company decided that this was a good idea? 
What were some of the business goals in adopting things like Docker and Kubernetes?
Thomas: Going to the Cloud, I think is a successful decision when an organization has those problems that the cloud-native technologies attempt to solve. And some goals that are commonly addressed by cloud-native technologies are, for example, scalability. We gain a lot of possibilities to scale our applications, not only in terms of computational resources, and le...

Nov 4, 2020 • 39min
Discussing Forter with CTO Iftah Gideoni
This conversation covers:The value that Forter provides, and the types of companies that they work with. Iftah also explains what makes Forter so unique. The underlying technology that Forter is using, and how they quickly process hundreds of complex backend workflows. Iftah also talks about some of the tools that they are using, including AWS and Apache Storm.How Forter approaches the cloud, and how it’s helping them concentrate on the business of detecting fraud. In addition, talks about the types of cloud services that Forter is using.Forter’s ability to scale — including how they responded to increased customer demand during COVID-19.Forter’s biggest technical challenge that they are currently working through.Iftah’s thoughts on the security- speed tradeoff.Links:ForterForter on TwitterConnect with Iftah on LinkedInIftah’s email: iftah@forter.comTranscript:Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I'm chatting with Iftah Gideoni. Iftah is the CTO at Forter. Iftah, first of all, thank you so much for joining me.Iftah: Very glad to be here.Emily: So, I wanted to have you start by introducing yourself and what you do, and then also what Forter does.Iftah: Hi, I'm Iftah. I’m a physicist of education, and in the last 20 years, a CTO of several companies, mostly [00:01:11 unintelligible] governmental companies, and companies that I founded. In the last six and a half years, I'm with Forter. And what Forter started to do from 2014 is to provide what was, at the time, very bold vision of fully automated, fully cloud-based decisions about whether to allow or decline e-commerce transactions. Now, from that time we actually implemented and executed that, we decide very many more than 3 million transactions every day, today, all in real-time without a human in the loop. And we expanded into being a fully-fledged trust engine that gives decisions not only about transactions, but about many other points of interaction with the consumer, for example, in their login time, and in other points where trust decision is needed.Emily: So, just because I think it might be interesting to listeners, give me some examples of, like, when somebody might interact with Forter or have some sort of action approved or declined by Forter.Iftah: Right. The prime customers of Forter are the big e-commerce enterprises. Think about the [00:02:42 Sephoras], the Nordstroms, the Home Depots, and this kind of companies. And whenever you press the button of requesting to committing to the purchase and you see this small things rounding on the screen, then it is sent to Forter and Forter within, usually, half a second returns a decision. Now, Forter does not act as an additional data point, or input, or score into some system of the merchant. It actually answer whether to approve or decline the transaction. 
In very many—and most of the revenue of Forter comes from a covered transaction that, if this transaction was fraud, it’s on Forter. Forter will guarantee it. And we were pioneering this model to putting our mouth where our money is.Emily: Tell me just a little bit about why this is so difficult. What makes what Forter does unique?Iftah: What Forter does is unique because it tells the human story, and takes it all the way to the decision itself. For example, it's very easy to approve the fourth transaction of a person that is sitting at home, browsing from home, making the purchase on the same desktop they made at previous times, and sending the shipment to the same home. That's very easy. But we want to be able to approve the traveler, the person that is sending a gift to a third party, or a person that is sending a gift to another state while not browsing from home and not from his common device. We want to be able to approve those transactions that are checking out as guests from a new device and that's the first time this person ever appeared on our radar. And the ability to do that and to take the calculated risks and to look at the behavior, the cyber clues, and still be able to tell that this is indeed a new person and not someone that visited before and is trying now to hide. That's what makes what we do very difficult and complex.Emily: So, tell me a bit about the technology story. What technology do you use to accomplish this, and how does it work? What does your stack look like?Iftah: When I came to—from 2014, I looked at the system and what is actually needed in order to cater to such a complex story? And I thought to myself—and we'll talk about maybe a bit later about how all this is excellently suited for the Cloud, but what I found that throughput and big data is not the problem. First, it’s more or less solved, but it is the e-commerce business; it's not Facebook scale throughput. And on the other hand, it's not hardcore real-time, right? We're talking about tens of milliseconds, not the microseconds domain. What is extreme about what we do is the complexity of the flow. We have hundreds of processes that are needed to be ran within that half a second in order to test, and check, and infer, and decide on many aspects of this transaction and of this person. So, first, we started from Amazon Web Services, and we started with, actually, Apache Storm. And why we decided that because we wanted to have something that enables first, a lot of parallelism—doing many things in parallel—with smart joins, that is with processes that takes information from other processes that executed in parallel, and can decide whether what they have so far from these processes is enough. Because we are very high availability, we didn't lose more than 10 seconds straight in the last four years. We are very high availability, but a lot of our sub-processes are not. So, you need such a machine that will be able to infer about whether the information at hand is good enough and to move forward and still give, after half a second, the answer. We also wanted to have within this high availability system, we wanted to have the domain experts, the analysts, and the fraud researchers, we wanted to give them a very direct access to the code and each insight that they get, in close to real-time, maybe in 10 or 15 minutes from the time that they understood that there is a new wave of attacks or a new fraudster in action in a particular store or across stores. 
We wanted all these insights to be manifested in the sys...
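To make the decision pattern Iftah describes more concrete, parallel checks racing a fixed deadline and a verdict computed from whatever results arrived in time, here is a small, hedged TypeScript sketch. Forter's production system is built on Apache Storm; the check shape, scoring, and threshold below are invented purely for illustration.

```typescript
type CheckResult = { name: string; score: number };

// Race a single check against the deadline; a check that misses it yields null.
function withDeadline<T>(p: Promise<T>, ms: number): Promise<T | null> {
  const timeout = new Promise<null>((resolve) => setTimeout(() => resolve(null), ms));
  return Promise.race([p, timeout]);
}

async function decideTransaction(
  checks: Array<() => Promise<CheckResult>>,
  deadlineMs = 500,
): Promise<"approve" | "decline"> {
  // Start every check in parallel, each racing the same half-second budget.
  const settled = await Promise.all(checks.map((run) => withDeadline(run(), deadlineMs)));

  // Keep only the checks that finished in time (the "information at hand").
  const available = settled.filter((r): r is CheckResult => r !== null);

  // Toy aggregation: approve when the average risk score of the available checks is low.
  const avg =
    available.reduce((sum, r) => sum + r.score, 0) / Math.max(available.length, 1);
  return avg < 0.5 ? "approve" : "decline";
}
```

A caller would pass in functions such as () => deviceFingerprintCheck(tx) (hypothetical names) and get back an approve-or-decline answer within roughly the deadline, even when some sub-processes never respond.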

Oct 28, 2020 • 31min
Aligning Open-Source and Business Goals with Tobie Langel
This conversation covers:Laying the groundwork for a successful open-source program office (OSPO).Why legal and engineering are usually the two main stakeholders in open-source projects.Why engineering teams tend to struggle at articulating their perspective on open-source. Tobie offers some improvement tips. How Tobie defines open-source strategy. Tobie also explains the risk of not having an open-source strategy, as well as his process for helping organizations determine the best strategy for their needs.Common challenges that businesses face when deploying open-source software. The secondary — or non-code — benefits of open-source, and why many organizations tend to overlook them.Tips for engineers in non-technology organizations like pharmaceuticals or finance to approach business leadership about open-source. LinksUnlockOpen: https://unlockopen.com/ Twitter: https://twitter.com/tobieTranscriptEmily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to The Business of Cloud Native. Today, I am talking with Tobie Langel from UnlockOpen, and I wanted to start, Tobie, by just asking, you know, what do you do? Can you give us sort of an introduction to what you do, and how you tend to spend your days?Tobie: Sure. So, I've been back into consulting for a number of years at this point. And I essentially focus on helping organizations align their open-source strategy with business goals. So, it can be both at the project level—so sometimes helping specific projects out—or larger strategy at the corporate level.Emily: So, I actually recently had Nithya Ruff, who's the head of the OSPO at Comcast on the podcast. For listeners who don't know, that's an open-source program office. So, are you sort of an outsourced OSPO for companies that aren't Comcast’s size?Tobie: So, that's a really good question. My answer would be no, but it tends to happen that I help companies build that capacity internally. So, I would generally tend to come up before an OSPO is needed, and help them figure out what exactly they need to build. For OSPO, my pet peeve is companies building OSPOs like they need to tick a checkbox on the list of the things that they have to do to be up-to-date with good engineering practices, if you will. In general, if you want to be successful, with an OSPO, it has to meet the particular needs of your company, and that's usually kind of hard to figure out if you just leave it to whoever in the organization is more interested in driving that effort. And so essentially, I sort of help in the early stages of that by bringing all of the stakeholders at the table, and essentially listening to them and making sure that what they want out of an OSPO is aligned between the different stakeholders and matches the overall strategy of the company.Emily: And who are the stakeholders that you're generally talking to?Tobie: So, essentially, open-sources is strange, for one reason, in terms of how it was adopted in companies from a historical perspective. 
Adopters have always been essentially engineers who just wanted better tools, or the package or the software that best fitted their current intention, and there's a very, very grassroots process by which companies start using open-source. And what happened at some point is companies sorted to see all of the software, and got concerned, and started trying to assess the risk. And so companies just tended to bring in the legal arm and lawyers at this point. And so to fulfill compliance questions, you bring in lawyers, and then the responsibility of grown-up open-source kind of falls on to lawyers, which tends to be problematic from the perspective of good engineering practice and velocity that you want from your engineering and product side in a company. And so clearly, the two stakeholders or the two main stakeholders tend to be legal and engineering, and there tends to be a tension between these two sides. And in lots of companies this tension, instead of being resolved to some degree, tends to be won by the legal side that understands business concerns better and is better able to praise or explain what they do in terms of business impact and business risks than the engineering side. And so this equilibrium tends to create OSPOs which are legal heavy, process heavy, and don't really give engineers the kind of freedom that they would need to be effective in their daily engineering practice. And the reason behind that being essentially over exaggerated risk perception of open-source because, to be frank, open-source is not well taught in legal school and clearly not part of the curricular that most lawyers are familiar with when they move into helping tech companies out. So, essentially, I sort of tried to bridge these two worlds.Emily: I can imagine that being an open-source lawyer, that's a niche, that's a very specific niche.Tobie: Yeah, actually there's a running joke in that community, which is, “As soon as you get your law degree and you’re an open-source lawyer, you’re one of the 25 best open-source lawyers in the world.”Emily: [laughs]. That's awesome. Why do you think engineering teams are so bad at clearly articulating their perspective on open-source, and what can they do to improve?Tobie: So, there are clearly multiple reasons why engineers aren't the best at articulating how open-source matters. So, I think one of the key ones, it's just, it's something that's part of their daily practice, and they don't really understand and never have been taught the actual intellectual property—IP—impact, that open-source has on their company, and they don't really understand how others in the company might perceive this IP impact. So, I think, one part of it is, essentially, this is just how engineers work. Like, you want to use a piece of software, you put it in it, right? If you want to fix something, well, you do a pull request. This is sort of, like, a common practice. And it's always hard to articulate things that are essentially part of your, like—you know, like a native language, like part of your culture. It's really hard to describe, why you would do this, and why it matters. So, I think that's one reason.The other reason, I think, is that there is a lot of overlap between the way legal works, and the way business works in general. Few examples of that are, engineers tend to think really like in binary way, like, you know, something is true or false, something is on or off, whereas business and law a much more spectrum thinking and into the gray area of things. 
Similarly, lawyers will share, with executives, a manager’s schedule, versus a maker’s schedule. So, there's lots of cultural artifacts of law culture in corporat...

Oct 21, 2020 • 28min
Exploring Open-Source and Cloud-Native with Tracy Miranda
The conversation covers: Tracy’s thoughts on how the relationship between open-source and cloud-native should be described.The advantages and disadvantages to an organization using open-source.Some of the major risks associated with using open-source, and why companies should approach with caution. Why CI/CD is a rising security concern for open-source organizations.Tracy also provides her thoughts on how businesses are handling the CI/CD pipeline today, and where the trend is heading.Some of the unresolved challenges related to continuous delivery that currently exist.Tracy’s advice for companies that are just starting to develop an open-source contribution strategy.How companies should approach topics like open-source strategizing and building open-source communities.The common mistakes that individuals and companies make when nurturing open-source communities. Tracy also comments on mistakes that people are making with continuous delivery.LinksCloudBees: https://www.cloudbees.com/Continuous Delivery Foundation: https://cd.foundation/Twitter: https://twitter.com/tracymiranda Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud naive journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.Emily: Welcome to The Business of Cloud Native. Today, I'm chatting with Tracy Miranda. Tracy, thank you so much for joining me.Tracy: Hi, Emily. Thanks for having me. It's my pleasure.Emily: So, as usual, I just want to start off with having you introduce yourself, both what you do, where you work, but also, like, some details, what does this actually mean? How do you actually spend your day?Tracy: Yeah, so I'm the director of open-source CloudBees, and I'm also the board chair at the Continuous Delivery Foundation, which is an open-source foundation, which is home to projects like Jenkins, and Spinnaker, and Tecton, and Jenkins X. So, basically, I'm a big fan of all things open-source, which in day-to-day means I'm doing anything which is related to building communities. So, either involved with code, or building communities and through conferences, or sometimes just the boring governance stuff around open-source.Emily: What is the boring governance stuff around open-source?Tracy: So, I guess it is just trying to get folks moving in the same direction, and reminding people that it's sometimes more than just code. And whether it's updating a code of conduct, and one of the things we've seen and—okay, I wouldn't call this boring; it's actually taken over a bit in open-source communities, but it's sort of different from the code, but it's the whole terminology updates. We've seen a lot of open-source communities have become more aware about wanting to be better about using terms like ‘master’ and ‘slave’ and move away from that. That being said, it's not that easy, so there's a lot to do in getting people on the same page and ready to move forward even before you can start changing a line of code.Emily: Since the topic of the podcast is cloud-native, obviously, open-source and cloud-native are related. 
In fact, some people think that cloud-native must be open-source. Where do you fall on that spectrum? How do you think the relationship between open-source and cloud-native should be described?Tracy: Yeah, I think that they're pretty distinct things. So, cloud-native is all about using the Cloud effectively and having technology which takes advantage of modern architectures to give you things like rapid elasticity, or on-demand self-service. And that's distinct from open-source, which is around the licensing, and it's become more about communities, as well. But I think because Kubernetes has been the most successful cloud-native project that is open-source, I guess there's become this very, very strong association which, in my mind, is a very, very good thing because I think open-source communities are really the way to drive innovation very, very quickly across the industry.Emily: And this may seem sort of obvious, but what are some of the advantages and disadvantages to an organization in using open-source?Tracy: Yes. So, I think—well, lots—virtually every company uses open-source, and the first thing people can see as the benefits are just the engineering efficiencies. So, using technologies which, say aren’t core to the business, but then building on top of those and taking advantage of the features rather than dedicating their own engineering resources to developing them. I used to work as a consultant, and I would go from company to company, and usually, they would be adopting open-source when they wanted to get away from an in-house project where the people or person who had written it had left the company. So, I think there's a lot to be said, as well, for sustainability of technology: that communities and open-source communities are really good at sustaining projects over the long term, and therefore kind of the best bet for technology that's going to live on beyond individuals or even companies, acquisitions, or whatever.Emily: Do you think there are any risks to using open-source? I'm even interested in hearing if there are risks that are not real, but that are perceived risks. And then even maybe some risks that people don't think about, but that are in fact, quite real.Tracy: Yes, yeah, no, absolutely there are risks. So, it's wise for companies to approach with caution. I think the risks sort of depend on which side—like, are you looking to just use open-source that someone else has written, or are you contributing something, which might be key to your company, but then you’re saying, “Okay, I'm going to do this in an open way,” which brings us to one of those common perceived myths, that someone, like a cloud provider, is then going to take your open-source software and do a better job of making money around it, so thereby just ruining your entire business model.And I think the other area where we tend to see a lot of dialogue around, is always around open-source security. For a long time, people used to, sort of, make out that this was different from closed source security, somehow. Security through obscurity meant that closed-source was better than open-source, which is clearly not the case. You can have secure open-source software, not secure open-source software. It just really depends on the project and the practices.Emily: And then also, I thought we'd talk a little bit specifically about this CI/CD work that you do. How important is CI/CD, do you think, in the pursuit of being cloud-native?Tracy: Yes, no, I think CI/CD h...