
Cloud Engineering Archives - Software Engineering Daily
Episodes about building and scaling large software projects
Latest episodes

Jan 5, 2018 • 52min
Cloud R&D with Onsi Fakhouri
In the first 10 years of cloud computing, a set of technologies emerged that every software enterprise needs: continuous delivery, version control, logging, monitoring, routing, and data warehousing. These tools were built into the Cloud Foundry project, a platform for application deployment and management.
As we enter the second decade of cloud computing, another new set of technologies is emerging as useful tools. Serverless functions allow for rapid scalability at a low cost. Kubernetes offers a control plane for containerized infrastructure. Reactive programming models and event sourcing make an application more responsive and simplify the interactions between teams who are sharing data sources.
The job of a cloud provider is to see new patterns in software development and offer tools to developers to help them implement those new patterns. Of course, building these tools is a huge investment. If you’re a cloud provider, your customers are trusting you with the health of their application. The tool that you build has to work properly and you have to help the customers figure out how to leverage the tool and resolve any breakages.
Onsi Fakhouri is the senior VP of R&D for cloud at Pivotal, a company that provides software and support for Spring, Cloud Foundry, and several other tools. I sat down with Onsi to discuss his strategy for determining which products Pivotal chooses to build. There is a multitude of engineering and business elements that Onsi has to consider when allocating resources to a project.
Cloud Foundry is used by giant corporations like banks, telcos, and automotive manufacturers. Spring is used by most enterprises that run Java, including most of the startups that I have worked at in the past. Cloud Foundry has to run on-premises and on cloud providers like AWS, Google, and Microsoft. Pivotal also has its own cloud, Pivotal Web Services, and all of these stakeholders have different technologies that they would like to see built. Onsi's job is to determine which ones have the highest net impact and allocate resources toward them.
I interviewed Onsi at SpringOne Platform, a conference organized by Pivotal which, full disclosure, is a sponsor of Software Engineering Daily. This week's episodes are all conversations from that conference, and if there is a conference you think I should attend and cover, let me know. Whether you like this format or not, I would love to get your feedback. We have some big developments coming for Software Engineering Daily in 2018 and we want to have a closer dialogue with the listeners. Please send me an email, jeff@softwareengineeringdaily.com, or join our Slack channel.

Jan 3, 2018 • 52min
Cloud Foundry with Rupa Nandi
Cloud Foundry is an open-source platform as a service for deploying and managing web applications. Cloud Foundry is widely used by enterprises who are running applications that are built using Spring, a popular web framework for Java applications, but developers also use Cloud Foundry to manage apps built in Ruby, Node and any other programming language. Cloud Foundry includes routing, message brokering, service discovery, authentication and other application level tooling for building and managing a distributed system. Some of the standard tooling in Cloud Foundry was adopted from Netflix open-source projects, such as Hystrix, which is the circuit breaker system; and Eureka, which is the service discovery server and client.
When a developer deploys their application to Cloud Foundry, the details of what is going on are mostly abstracted away, which is by design. When you’re trying to ship code and iterate quickly for your organization, you don’t want to think about how your application image is being deployed to underlying infrastructure. You don’t want to think about whether you’re deploying a container or a VM, but if you use Cloud Foundry enough, you might have become curious about how Cloud Foundry schedules and runs application code.
BOSH is a component of Cloud Foundry that sits between the infrastructure layer and the application layer. Cloud Foundry can be deployed to any cloud provider because of BOSH's well-defined interface. BOSH has the abstraction of a stemcell, which is a versioned operating system image wrapped in packaging for whatever infrastructure-as-a-service is running underneath. With BOSH, whenever a VM gets deployed on your underlying infrastructure, that VM gets a BOSH agent. The agent communicates with the centralized component of BOSH called the director. The director is the leader of the distributed system.
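As a rough mental model (not BOSH's actual protocol, which runs over a message bus and reports much richer state), a toy Python sketch of the agent/director relationship might look like this; the class and field names are illustrative assumptions:

```python
import time

# Toy sketch only: each VM runs an agent that reports its state to a central
# director, which keeps track of every VM in the deployment. Real BOSH agents
# communicate over NATS and report job state, vitals, and more.

class Director:
    def __init__(self):
        self.vms = {}  # vm_id -> last reported state

    def handle_heartbeat(self, vm_id, state):
        self.vms[vm_id] = {"state": state, "seen_at": time.time()}

class Agent:
    def __init__(self, vm_id, director):
        self.vm_id = vm_id
        self.director = director

    def heartbeat(self):
        # Report a single status string; a real agent reports much more.
        self.director.handle_heartbeat(self.vm_id, "running")

director = Director()
agents = [Agent(f"vm-{i}", director) for i in range(3)]
for agent in agents:
    agent.heartbeat()
print(len(director.vms))  # 3 VMs are now known to the director
```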
Rupa Nandi is a director of engineering at Pivotal, where she works on Cloud Foundry. In this episode we talked about scheduling and infrastructure, the relationship between Spring and Cloud Foundry, and the impact of Kubernetes, which Cloud Foundry has integrated with so that users can run Kubernetes workloads on Cloud Foundry.
I interviewed Rupa at SpringOne Platform, a conference organized by Pivotal which, full disclosure, is a sponsor of Software Engineering Daily, and this week's episodes are all conversations from that conference. Whether or not you like this format, I would love to get your feedback. We have some big developments coming for Software Engineering Daily in 2018 and we want to have a closer dialogue with the listeners. Please send me an email, jeff@softwareengineeringdaily.com, or join our Slack channel. We really want to know what you're thinking: what you would like to hear more about, what you'd like to hear less about, and who you are.

Dec 15, 2017 • 56min
High Volume Logging with Steve Newman
Google Docs is used by millions of people to collaborate on documents together. With today’s technology, you could spend a weekend coding and build a basic version of a collaborative text editor. But in 2004 it was not so easy.
In 2004 Steve Newman built a product called Writely, which allowed users to collaborate on documents together. Initially, Writely was hosted on a single server that Steve managed himself. All of the reads and writes to the documents went through that single server. Writely rapidly grew in popularity, and Steve went through a crash course in distributed systems as he tried to keep up with the user base.
In 2006, Writely was acquired by Google, and Steve spent the next four years turning Writely into Google Docs. Eventually he moved on to other projects within Google: "Cosmo" and "Megastore Replication." When Steve left the company in 2010, he took with him the lessons of logging and monitoring that keep Google's infrastructure observable.
Large organizations have terabytes of log data to manage. This data streams off the servers that are running our applications. That log data gets processed in a "metrics pipeline" and turned into monitoring data, which aggregates the logs into a more presentable format.
Most of the log messages that get created will never be seen by human eyes. These logs get aggregated into metrics, then compressed, and (in many cases) eventually thrown away. Different companies have different sensitivity around their logs, so some companies may not garbage collect any of their logs!
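As a rough illustration of that aggregation step, here is a minimal Python sketch that rolls raw log lines up into per-minute counts per log level. The log format is an assumption made for the example; a production metrics pipeline is far more involved.

```python
import re
from collections import Counter

# Minimal sketch of the first stage of a metrics pipeline: turning raw log
# lines into aggregated monitoring data. The line format below is assumed
# for illustration only.

LOG_LINE = re.compile(r"(?P<ts>\S+) (?P<level>\w+) (?P<msg>.*)")

def aggregate(log_lines):
    counts = Counter()
    for line in log_lines:
        match = LOG_LINE.match(line)
        if match:
            # Bucket each line into one counter per (minute, level) pair.
            minute = match.group("ts")[:16]   # e.g. "2017-12-15T10:32"
            counts[(minute, match.group("level"))] += 1
    return counts

logs = [
    "2017-12-15T10:32:01 ERROR payment timeout",
    "2017-12-15T10:32:05 INFO request served",
    "2017-12-15T10:32:09 ERROR payment timeout",
]
print(aggregate(logs))
# Counter({('2017-12-15T10:32', 'ERROR'): 2, ('2017-12-15T10:32', 'INFO'): 1})
```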
When a problem occurs in our infrastructure, we need to be able to dig into our terabytes of log data and quickly find the root cause of a problem. If our log data is compressed and stored on disk, it will take longer to access it. But if we keep all of our logs in memory, it could get expensive.
To review: if I want to build a logging system from scratch today, I need to build a metrics pipeline for converting log data into monitoring data; a complicated caching system; a way to store and compress logs; a query engine that knows how to ask questions of the log storage system; and a user interface so I don't have to inspect these logs via the command line…
The list of requirements goes on and on—which is why there is a huge industry around log management. And logging keeps evolving! One example we covered recently is distributed tracing, which is used to diagnose requests that travel through multiple endpoints.
After Steve Newman left Google, he started Scalyr, a product that allows developers to consume, store, and query log messages. I was looking forward to talking to Steve about data engineering, and the query engine that Scalyr has architected, but we actually spent most of our conversation talking about the early days of Writely, and his time at Google—particularly the operational challenges of Google’s infrastructure. Full disclosure: Scalyr is a sponsor of Software Engineering Daily.

Dec 14, 2017 • 44min
Scala at Duolingo with Andre Kenji Horie
Duolingo is a language learning platform with over 200 million users. On a daily basis millions of users receive customized language lessons targeted specifically to them. These lessons are generated by a system called the session generator.
Andre Kenji Horie is a senior engineer at Duolingo. He wrote about the process of rewriting the session generator, moving from Python to Scala and changing the architecture at the same time. In this episode, Adam Bell talks with him about the reasons for the rewrite, what drove the team to Scala, and the experience of moving from one technology stack to another.
Show Notes
Rewriting Duolingo's Engine in Scala
Jobs at Duolingo

Dec 12, 2017 • 56min
Cloud Marketplace with Zack Bloom
Ten years ago, if you wanted to build software, you probably needed to know how to write code. Today, the line between “technical” and “non-technical” people is blurring.
Website designers can make a living building sites for people on WordPress or Squarespace–without knowing how to write code. Salesforce integration experts can help a sales team set up complicated software–without knowing how to write code. Shopify experts can set up an ecommerce store to your exact specifications–without knowing how to write code.
WordPress, Squarespace, Salesforce, and Shopify are all fantastic services–but they are not compatible with each other. I can’t install a WordPress plugin on Salesforce.
Now imagine this from the point of view of plugin creators. Plugin creators make easy ways to integrate different pieces of software together. Take PayPal as an example. PayPal wants to make it easy for software builders to integrate with their API.
One plugin that PayPal offers is a "Pay with PayPal" button. If I am a developer at PayPal building a button that people can easily put on their webpages so their users can pay with PayPal, I have to make that button compatible with WordPress, Squarespace, Wix, Weebly, GoDaddy, Blogger, and every other website builder I might want to integrate with.
In 2014, Zack Bloom started a company called Eager. Eager was a cloud app marketplace that allowed app developers to build flexible plugins that non-technical users could drag and drop into their sites.
In order for these non-technical users to add any apps from the Eager marketplace to their webpage, they had to drop in a line of JavaScript, which is, unfortunately, a significant hurdle for a non-technical user.
Eager proved to be a useful distribution mechanism for plugin developers, who could write a plugin once and have it distributed to multiple plugin marketplaces. But Eager was not as widely used as a way to directly drag and drop plugins onto sites.
The question was: how do you build a marketplace for non-technical users to add plugins to any website without forcing the non-technical user to write code? How do you make editing any website as easy as a WYSIWYG editor?
The CDN turns out to be the perfect distribution platform for these kinds of apps. Users already integrate with a CDN, so the CDN can do the work of inserting the code that allows the plugins to be added to a user’s webpage.
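Conceptually, the CDN can rewrite HTML responses as they pass through and inject the script tag that loads an app. Here is a toy Python sketch of that idea; the script URL is a made-up placeholder, not a Cloudflare Apps endpoint:

```python
# Toy sketch of CDN-injected apps: the CDN sits in front of the origin, and
# as HTML responses pass through it can insert the script tag that loads a
# plugin. The URL below is a hypothetical placeholder.

APP_SNIPPET = '<script src="https://example-cdn.test/apps/widget.js"></script>'

def inject_app(html: str) -> str:
    # Insert the app's script tag just before the closing body tag.
    if "</body>" in html:
        return html.replace("</body>", APP_SNIPPET + "\n</body>", 1)
    return html + APP_SNIPPET

page = "<html><body><h1>My shop</h1></body></html>"
print(inject_app(page))
```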
Because of this opportunity for integration between a plugin marketplace and a CDN, Eager was acquired by Cloudflare, and Eager became Cloudflare Apps. Zack Bloom joins the show today to discuss the motivations for his company, the engineering behind building a cloud app marketplace, and the acquisition process of his company, Eager.

Dec 11, 2017 • 1h 7min
Scalable Multiplayer Games with Yan Cui
Remember when the best game you could play on your phone was Snake?
In 1998, Snake was preloaded on Nokia phones, and it was massively popular. That same year, Half-Life won game of the year on PC. Metal Gear Solid came out for PlayStation. The first version of StarCraft also came out in 1998.
In 1998, few people would have anticipated that games with as much interactivity as Starcraft would be played on mobile phones twenty years later. Today, mobile phones have the graphics and processing power of a desktop gaming PC from two decades ago.
But one thing still separates desktop gaming from mobile gaming: the network.
With desktop gaming, users have a reliable wired connection that keeps their packets moving over the network with speeds that let them compete with other users. With mobile gaming, the network can be flaky. How do we architect real-time strategy games that can be played over an intermittent network connection?
Yan Cui is an engineer at Space Ape Games, a company that makes interactive multiplayer games for mobile devices. In a previous episode, Yan described his work re-architecting a social networking startup whose costs had gotten out of control. Yan has a knack for describing software architecture and explaining the tradeoffs.
When architecting a multiplayer mobile game, there are many tradeoffs to consider. What do you build and what do you buy? Do you centralize your geographical deployment to make it easier to reconcile conflicts, or do you spread your server deployment out globally? What is the interaction between the mobile clients and the server?
The question of interaction between client and server for a mobile game has lessons that are important for anyone building a highly interactive mobile application.
For example, think about Uber. When I request a car, I can look at my phone and see the car on the map, slowly approaching me. The driver can look at his phone and see me move across the street.
This is accomplished by synchronizing the data from the driver’s phone and my phone in a centralized server, and sending the synchronized state of the world out to me and the driver. How much data does the centralized server need to get from the mobile phones? How often does it need to make those requests?
The answers to these questions will vary based on bandwidth, device type, phone battery life, and other factors.
There are similar problems in mobile game engineering, where users are different players on a virtual map. They are fighting each other, trying to avoid enemies, and trying to steal power-ups from each other. Mobile games can be even more interactive than a ridesharing app like Uber, so the questions of data synchronization can be even harder to answer.
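A common pattern for this kind of synchronization is a server-authoritative loop: clients push their own updates, and the server broadcasts a merged snapshot of the world on each tick. The Python sketch below illustrates the pattern only; it is not Space Ape's or Uber's actual architecture, and the payload shape is an assumption.

```python
# Toy sketch of server-authoritative state sync: clients report positions as
# they move, and the server broadcasts a merged snapshot to every connected
# client on each tick.

class SyncServer:
    def __init__(self):
        self.positions = {}    # player_id -> (x, y)
        self.clients = []      # callbacks standing in for client connections

    def receive_update(self, player_id, x, y):
        # A client reports its own position; the server is the source of truth.
        self.positions[player_id] = (x, y)

    def broadcast_tick(self):
        # On a fixed tick, push the merged state of the world to every client.
        snapshot = dict(self.positions)
        for push in self.clients:
            push(snapshot)

server = SyncServer()
server.clients.append(lambda snapshot: print("client sees:", snapshot))
server.receive_update("driver", 3.0, 4.0)
server.receive_update("rider", 3.1, 4.2)
server.broadcast_tick()
```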
On Software Engineering Daily, we have explored the topic of real-time synchronization in our past shows about the infrastructure of Uber and Lyft. To find these old episodes, you can download the Software Engineering Daily app for iOS and for Android. In other podcast players, you can only access the most recent 100 episodes. With these apps, we are building a new way to consume content about software engineering. They are open-sourced at github.com/softwareengineeringdaily. If you are looking for an open source project to get involved with, we would love to get your help.
Show Notes
Yan Cui’s new video course: AWS Lambda in Motion

Dec 8, 2017 • 1h 15min
Decentralized Objects with Martin Kleppmann
The Internet was designed as a decentralized system.
Theoretically, if Alice wants to send an email to Bob, she can set up an email client on her computer and send that email to Bob’s email server on his computer. In reality, very few people run their own email servers. We all send our emails to centralized services like Gmail, and connect to those centralized services using our own client—a browser on our laptop or a mobile application on our smart phone.
Gmail is popular because nobody wants to run their own email server—it's too much work. With Gmail, our emails are centralized, but that centralization comes with convenience.
Similar centralization happened with online payments.
If Alice wants to send $5 to Bob, she needs to go through centralized banking infrastructure. Alice tells her bank to send $5 from her bank account to Bob's bank account. This is not how it works in the physical world. If Alice wants to pay Bob in cash, she doesn't have to go and meet him at a physical bank. She just takes a $5 bill out of her wallet and hands it to him.
The invention of Bitcoin proved that digital wallets and peer-to-peer payments are possible. But running your own wallet is like running your own email server. It is inconvenient, and so we trade decentralization for convenience once again. We use services like Coinbase, where users buy and sell cryptocurrencies in a centralized provider.
There are people in the cryptocurrency community who hate the idea of Coinbase. These people keep their cryptocurrency spread out on their own hardware wallets. Some of these people also run their own email servers.
Are these people just adding unnecessary inconvenience to their lives for no reason? No. These are smart, successful people. They don’t like to waste time. So what are they doing running their own email servers?
Distributed systems theory teaches the risk of centralized computer systems. If you have a single server that all your communication has to be routed through, your computer network will stop functioning if that server dies.
Today, civilization is reliant on centralized computer systems. This is fundamentally dangerous. The 2008 financial crisis proved how risky it is to centralize our money in the hands of a few people. The Equifax breach proved how risky it is to centralize our identity in the hands of a few people.
What happens if Dropbox runs out of money and has to shut down? What happens if all of the data centers at Amazon Web Services get wiped simultaneously? What happens if Coinbase gets hacked and every user at Coinbase loses all their money?
We have seen centralized systems collapse. The people who are running their own email servers are not crazy. Even if Gmail disappears tomorrow, they will still have access to their emails. With the example of email, we see that deploying and managing a decentralized system is possible.
Decentralization is a desirable feature of computer systems. So how do we make more of our applications decentralized?
The cypherpunks were working for decades to make decentralized money a reality. Satoshi Nakamoto invented the blockchain, and we now have a computer science construct that enables decentralized money. The blockchain also enables many other decentralized applications.
By solving a specific problem, Satoshi came up with a general solution. This is how progress often happens in computer science. In order to fix a system, we create a new tool. That tool can be applied to other systems that we don’t anticipate.
The blockchain is a tool that solves one set of problems in distributed systems. Conflict-free replicated data types are another type of tool.
Conflict-free replicated data types (or CRDTs for short) are objects that can be mutated by multiple users at the same time without creating data corruption. The most common example of a conflict-free replicated data type is the shopping cart.
Let’s say Alice and Bob share an account on an ecommerce web site. Alice is building a house, and she wants to buy some tools online. Alice has a shopping cart with a hammer in it. Bob logs into the ecommerce web site from a different computer at the same time Alice is logged in. Bob just wants to buy a tuxedo—he doesn’t know why Alice left a hammer in the shopping cart, so he clicks a button to remove all the items from the shopping cart. At the exact same moment, Alice clicks from her computer to add a drill to the shopping cart.
The server receives both requests: Bob wants to delete all items in the shopping cart, Alice wants to add a drill to the shopping cart. Both requests occurred at the exact same time, but we have to decide how to process them in some order. This is a situation known as a conflict.
Which request should execute first? Should the resulting shopping cart be empty? Should the shopping cart only have a drill in it? In either case, Alice or Bob is going to be disappointed—there is no way to avoid that. But we need some way to resolve the conflict deterministically. We do not want to have to send a message to both Alice and Bob that says “sorry, our shopping cart cannot handle your request. Please try again later.”
We need the shopping cart to be a conflict-free shopping cart—and today’s episode is about the different techniques that can be used for conflict resolution.
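One way the cart can converge without rejecting anyone's request is an observed-remove set (OR-Set), a simple CRDT in which a remove only affects the adds it has actually observed, so a concurrent add survives. The Python sketch below is a toy illustration, not an implementation discussed in the episode:

```python
import uuid

class ORSetCart:
    """Toy observed-remove set (OR-Set) for the shopping-cart example.

    Each add is tagged with a unique id; a remove only deletes the tags the
    replica has observed, so a concurrent add always survives the merge.
    """

    def __init__(self):
        self.adds = {}      # item -> set of unique add tags
        self.removes = {}   # item -> set of tags known to be removed

    def add(self, item):
        self.adds.setdefault(item, set()).add(uuid.uuid4().hex)

    def remove_all_observed(self):
        # Bob empties the cart: he can only remove the tags he has seen.
        for item, tags in self.adds.items():
            self.removes.setdefault(item, set()).update(tags)

    def merge(self, other):
        # Merging replicas is a union of their add and remove tags.
        for item, tags in other.adds.items():
            self.adds.setdefault(item, set()).update(tags)
        for item, tags in other.removes.items():
            self.removes.setdefault(item, set()).update(tags)

    def contents(self):
        return {item for item, tags in self.adds.items()
                if tags - self.removes.get(item, set())}

# Alice and Bob start from the same cart containing a hammer.
alice = ORSetCart()
alice.add("hammer")
bob = ORSetCart()
bob.merge(alice)

bob.remove_all_observed()   # Bob clears the cart he can see
alice.add("drill")          # concurrently, Alice adds a drill

alice.merge(bob)
bob.merge(alice)
print(alice.contents() == bob.contents() == {"drill"})  # True: the add wins
```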
The shopping cart is a simple example where user collaboration leads to conflicts. Imagine all the other ways you collaborate with other users: chat systems like Slack, social networks like Facebook, document systems like Google Docs.
One way to resolve a conflict is through a technique called operational transform. Operational transform requires all the operations in the distributed system to be funneled through a centralized server. When a conflict occurs, the centralized server detects the problem and figures out how to resolve it.
Google Docs uses operational transform to resolve the frequent conflicts that occur when two users are sharing a text document. But operational transform only works if you have a centralized server.
An alternative solution is conflict-free replicated data types, which maintain each user’s replica of the data in a format that allows the client copies to resolve conflicts in a peer-to-peer fashion—without a centralized server.
Last example: Alice and Bob are now collaborating on a document that uses a CRDT data structure under the hood. Whenever they send their local changes to each other, any conflicts that occur can be resolved directly on the client. Alice and Bob can collaborate on a document just like they can send emails to each other.
With CRDTs, we can build decentralized, collaborative applications. But CRDTs are hard to use. Just like with blockchain technology, we don’t yet have the simple, elegant abstractions that let inexperienced programmers build peer-to-peer applications without the fear of conflicts.
Martin Kleppmann is a distributed systems researcher and the author of Designing Data-Intensive Applications. Martin is concerned by the centralization of our computer networks, and he works on CRDT technology in order to make it easier for people to build peer-to-peer applications.
Most of the people who know how to build systems with CRDTs are distributed systems PhDs, database experts, and people working at huge internet companies. How do you make developer-friendly CRDTs? How do you allow random hackers to build peer-to-peer applications that avoid conflicts? Start by making a CRDT out of the most widely used, generalizable data structure in modern application development: the JSON object.
In today’s episode, Martin and I talk about conflict resolution, CRDTs, and decentralized applications. This is Martin’s second time on the show, and his first interview is the most popular episode to date. You can find a link to that episode in the show notes for this episode, or you can find it in the Software Engineering Daily app for iOS and for Android. In other podcast players, you can only access the most recent 100 episodes. With these apps, we are building a new way to consume content about software engineering. They are open-sourced at github.com/softwareengineeringdaily. If you are looking for an open source project to get involved with, we would love to get your help.

Dec 7, 2017 • 41min
Serverless Applications with Randall Hunt
Developers can build networked applications today without having to deploy their code to a server. These “serverless” applications are constructed from managed services and functions-as-a-service.
Managed services are cloud offerings like database-as-a-service, queueing-as-a-service, or search-as-a-service. These managed services are easy to use. They take care of operational burdens like scalability and outages. But managed services typically solve a narrow use case. You can’t build an application entirely out of managed services.
Managed services are scalable and narrow. Functions-as-a-service are scalable and flexible.
With managed services, you make remote calls to a service with a well-defined API. With functions-as-a-service, you can deploy your own code. But functions-as-a-service execute against transient, unreliable compute resources. They aren’t a good fit for low latency computation, and the code you run on them should be stateless.
Managed services and functions-as-a-service are the perfect complements.
Managed services provide you with well-defined server abstractions that every application needs—like databases, search indexes, and queues. Functions as a service offer flexible “glue code” that you can use to create custom interactions between the managed services.
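As a hedged example of what that glue code can look like, here is a minimal Lambda-style Python handler that copies messages from a managed queue into a managed database. The table name, event shape, and trigger are assumptions for illustration, not details from the episode.

```python
import json
import boto3

# Sketch of serverless "glue code": a Lambda-style handler that takes a batch
# of messages from a managed queue (an SQS-triggered event) and writes them
# into a managed database (DynamoDB). The "orders" table is hypothetical.

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # assumed table name

def handler(event, context):
    # An SQS-triggered invocation delivers a batch of records in event["Records"].
    records = event.get("Records", [])
    for record in records:
        order = json.loads(record["body"])
        table.put_item(Item=order)
    return {"processed": len(records)}
```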
The term “serverless” is used to describe the applications that are built entirely with managed services and functions as a service.
Serverless applications are dramatically simpler to build and easier to operate than cloud applications of the past. Managed services can get expensive, but functions as a service can cost a tenth of what it might take to run a server that handles your requests.
Whether the size of your bill will increase or decrease as your company becomes “serverless” is less of an issue than the fact that your employees will be more productive: serverless applications have less operational burden, so developers spend more time architecting and implementing software.
It has been 5 years since the Netflix infrastructure team talked about the aspirational goal of a "no-ops" software culture. Your software should be so well-defined that you do not need regular intervention from ops staff to reboot your servers and reconfigure your load balancers. Serverless is a newer way of shifting that operational work onto the cloud provider.
Today’s guest Randall Hunt is a senior technical evangelist with Amazon Web Services. He travels around the world meeting developers and speaking at conferences about AWS Lambda, the functions as a service platform from Amazon. Randall has given some excellent talks about how to architect and build serverless applications (which I will add to the show notes), and today we explore those application patterns further.
Show Notes
Serverless Services – Randall Hunt
Randall Hunt at AWS Summit Seoul
Serverless, What is it Good For? Randall Hunt

Dec 4, 2017 • 1h 5min
Serverless Scheduling with Rodric Rabbah
Functions as a service are deployable functions that run without an addressable server.
Functions as a service scale without any work by the developer. When you deploy a function as a service to a cloud provider, the cloud provider will take care of running that function whenever it is called.
You don't have to worry about spinning up a new machine, monitoring it, and spinning it down once it becomes idle. You just tell the cloud provider that you want to run a function, and the cloud provider executes it and returns the result.
Functions as a service can be more cost effective than running virtual machines or containerized infrastructure, because you are letting the cloud provider decide where to schedule your function, and you are giving the cloud provider flexibility on when to schedule the function.
The developer experience for deploying a serverless function can feel mysterious. You send a blob of code into the cloud. Later on, you send a request to call that code in the cloud. The result of the execution of that code gets sent back down to you. What is happening in between?
Rodric Rabbah is the principal researcher and technical lead in serverless computing at IBM. He helped design Apache OpenWhisk, the open-source functions-as-a-service platform that IBM has deployed and operationalized as IBM Cloud Functions. Rodric joins the show to explain how to build a platform for functions as a service.
When a user deploys a function to IBM Cloud Functions, that function gets stored in a database as a blob of text, waiting to be called. When the user makes a call to the function, IBM Cloud Functions takes it from the database and queues the function in Kafka, and eventually schedules the function onto a container for execution. Once the function has executed, IBM Cloud Functions stores the result in a database and sends that result to the user.
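To make that "blob of code" concrete, here is a minimal OpenWhisk-style Python action: a function that receives a dictionary of parameters and returns a JSON-serializable dictionary. The action name and parameter below are illustrative.

```python
# Minimal sketch of an OpenWhisk-style Python action. The platform stores the
# code, queues each invocation, runs it in a container, and returns the
# dictionary as the JSON result.

def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

# Deployed and invoked with the wsk CLI, roughly:
#   wsk action create greet greet.py
#   wsk action invoke greet --param name Rodric --result
```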
When you execute a function, the time spent scheduling it and loading it onto a container is known as the “cold start problem”. The steps of executing a serverless function take time, but the resource savings are significant. Your code is just stored as a blob of text in a database, rather than sitting in memory on a server, waiting to execute.
In his research for building IBM Cloud Functions, Rodric wrote about some of the tradeoffs for users who build applications with serverless functions. The tradeoffs exist along what Rodric calls “the serverless trilemma.”
In today’s episode, we discuss why people are using functions-as-a-service, the architecture of IBM Cloud Functions, and the unsolved challenges of building a serverless platform. Full disclosure: IBM is a sponsor of Software Engineering Daily.
Show Notes
IBM Cloud Functions
Apache OpenWhisk

Nov 30, 2017 • 53min
React and GraphQL at New York Times
Are we a media company or a technology company? Facebook and the New York Times are both asking themselves this question.
Facebook originally intended to focus only on building technology–to be a neutral arbiter of information. This has turned out to be impossible. The Facebook newsfeed is defined by algorithms that are only as neutral as the input data. Even if we could agree on a neutral data set to build a neutral newsfeed, the algorithms that generate this news feed are not public, so we have no way to vet their neutrality.
Facebook is such a powerful engine for distribution that it has allowed a rise in the number of publishers who can get their voices heard. As a result, large media companies have lost market share because Facebook has replaced their distribution.
The New York Times has always been a media company–but the standards for media consumption have shot up. Millions of people produce content for free, and that content is distributed through high quality experiences like Twitter, YouTube, Medium, and Facebook. When a page takes too long to load on NewYorkTimes.com, it doesn’t matter how good the content is–the user is going to navigate away before they read anything.
Today, the New York Times has built out an experienced engineering team. In a previous episode, we reported on how the Times uses Kafka to make its old content more accessible. In today's show, we talk about how the Times uses React and GraphQL to improve site performance and the developer experience of the engineers building software at the New York Times.
Scott Taylor and James Lawrie are software engineers at the New York Times. In this episode, they explain how the New York Times looks at technology. The user experience on NewYorkTimes.com rivals that of a platform company like Facebook, and this is assisted by technologies originally built at Facebook: React, Relay, and GraphQL.