In Conversation: Louis-Victor Jadavji of Taloflow

The In Conversation series is an opportunity for us to talk to the people behind Victory Square Technologies subsidiaries and give you an up close and personal look inside what it is they do and who they are.

The latest in the series is a chat with Louis-Victor Jadavji, CEO of Taloflow. Louis-Victor, one of Forbes’ 30 Under 30 in 2016, sat down with former CBC Radio host James Graham at the Taloflow offices in downtown Vancouver to talk about saving money, existential crises and precisely who Tim is.

_

James Graham: I guess the first and most immediate question I should ask is, what is Taloflow?

Louis-Victor Jadavji: Taloflow is an observability and automation platform. Taloflow reveals and optimizes the cost of every process running on the cloud. Teams can save upwards of 40% on their AWS costs and offload time-consuming DevOps responsibilities with peace of mind.

James: Now can you break that down for me in layman’s terms? How would you explain it to your parents?

LVJ: Companies today are moving to the cloud at a very rapid pace; it’s probably one of the most important computing paradigm shifts of the last few decades. The move to the cloud has led to a lot of inefficiencies, and people have been caught off guard, especially when it comes to cloud costs. Today, between the major cloud providers that you hear of, usually Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, the run rate for those businesses altogether gets close to about a hundred billion a year. About 40% of all that spend is wasted.

James: How so?

LVJ: That spend is wasted for many different reasons. A lot of the pure waste comes from people paying for stuff they’re not using. I’m not this old, but if you look back to the dot-com bubble, companies in that frothy time would overprovision a lot of office space. So you’d have 10 people in an office of 10,000 square feet, right? Then the crash happened and they realized that wasn’t the most efficient use of space. You have a similar effect now. At a lot of technology companies and firms, we’ve given developers a lot of autonomy, which is really important. Then, the way we’ve organized our developer teams has been around these microservices, and the teams mirror those services, so they’re fairly small in size. Within big organizations, there are hundreds of these teams spinning up instances or resources on the cloud with little accountability. Usually, they spin these things up and forget they exist, or spin them up and always pick the biggest size. So a lot of the waste is basically for compute that is not used at all.

James: Would you say this is because people are just being gratuitous for gratuitousness’ sake? Or is this a case of,  we’re so early in the stages of development in the usage of cloud services and cloud computing, that people just aren’t sure how big or how small to go?

LVJ: That’s a part of it. So we want to be a decision-making platform. We’re helping you make those decisions or automating those decisions as quickly as possible, so you’re always getting the biggest bang for your buck and saving where you need to. But a big part of the reason we’re in this state as an industry is because the responsibility for controlling infrastructure costs used to lie in the finance department, and it still lives there today. Before the cloud, they had pretty good controls over what the IT infrastructure budget was. But now you’ve given every development team the equivalent of a credit card. They have no idea how to control these costs. The finance department goes to the CTO or the different engineering managers and tries to understand why we’re spending this much on this. Then you get the walkaround. “Well you know, it’s stuff you don’t understand” and this and that, and for lack of a better word, allocation and cost control have become very cloudy.

James: It’s taken you a bit because you’ve gone from LocoNoco which did one thing to Taloflow, which does something else. What was the Eureka moment? What was the recognition of the pain point that suggested that this is our opening, this is what we need to go and do?

LVJ: In our leadership team, we credit this change to a moment of clarity we had last summer. We had a business that was very focused on building an entirely new way to build ad-hoc integrations using rules and rules engines. We had a couple of customers using the platform and we were producing some decent revenue for a few-months-old company. Our customers started complaining about cloud costs and at the same time, we came to the realization that the business we were going to go after was going to be a grind due to the customization required for each incremental customer.

So we had an existential crisis during the summer of 2018. I remember we had a phone call in August with the whole team and it was very frank. It’s like, look, here’s the data. This is obviously not going to work. But, our customers have been complaining about cloud costs. We started helping them with that and then we found there’s a huge pain point here, because we’re spending three to four days at the end of every month poring through Excel sheets to figure out how to save them costs. There should be a way to automate this, so I’m going to try all the tools available. They didn’t work well, and we got to a point where it was very much a case of “knowing this, what’s the go-forward plan?”

So we made the decision to pursue the cloud idea and restructured the team accordingly. We formally pivoted on September 1, 2018. Within six to eight weeks, we had an alpha that was used by one of our customers to start saving cloud costs. Then we opened up a beta program and started getting a lot of interest in it. It started January 1 and it’s going to run through March. We find that our ability to show cost savings so quickly will make for a very frictionless sale.

What we also realized is that a lot of the teams we were working with a year ago, they wouldn’t be thinking about solving this type of problem. But now it’s come to a head where engineers are feeling the pressure from the finance department and are looking for a way out; a way to automate a lot of the manual grunt-type work of allocating costs and continually looking for where to save.

James: Nobody likes to do the math.

LVJ: No one wants to do the math, but the time for saving money seems to be this year. It might be a change in sentiment in the market, especially as people are fearful of a bear market cycle coming about. You know we’re there when people start to debate whether growth at all costs or achieving profitability is the right strategy. The first area to trim for most companies in the tech industry is cloud costs.

James: So who is Tim?

LVJ: Tim is the friendly name for our product. It’s the Taloflow Instance Manager, but we’ll just call him Tim from now on. It’s a bot that acts as your co-pilot or autopilot for cloud cost savings and the automations related to them.

James: How difficult have you found it for the beta testers to wrap their heads around using bots and AI as an integrated part of their business?

LVJ: So it’s equal parts observability and automation. We’re working with very sophisticated engineers when we approach companies in the tech industry. These engineers like to make decisions and define the rules. From there, we can automate knowing the patterns of behavior. So the way we see Tim’s progression is that there’s an emphasis on information, monitoring, alerting, dashboarding, and observability. We’re calling all this observability, end to end. Can I make a decision with the information presented on screen and pinpoint root causes? So that’s a big part of Tim’s focus today.

But going forward, because of the monitoring we detect savings opportunities, so we can offer quick recommendations. Based on how we engage with developers or engineers on a one-to-one basis, how they react to those recommendations and whether they approve them or disapprove them – all that goes into our feedback loop. That’s a simple flow to eventually automate those decisions.

James: Is that feedback loop specific to just my account, or does it carry over and shape what Tim does more broadly?

LVJ: It depends, at least within an organization. If I’m a developer who works from nine to five and Tim notices that I have a resource running but I’m not at work, it’ll shut it off. But for another developer, the pattern of office hours might be quite different. Sometimes these workloads can be sensitive, and so not only does the developer have to approve it, but so does their manager and a few other people involved with that specific resource.

Tim is smart enough to understand the decision flow. Tim is not just an intelligence layer, it’s also an engagement layer. The intelligence layer is all about the simulations; we run the predicted usage and so on. The engagement layer is equally important because that allows us to add that one-to-one relationship with each developer. Five years ago that engagement layer wouldn’t have been nearly as important as it is today, because back then it was feasible to have someone in a centralized setting at a dashboard identifying cost-saving opportunities.
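
(Editor’s note: as a rough illustration of the office-hours automation Louis-Victor describes, here is a minimal sketch in Python using boto3. The “OfficeHours” tag, the schedule format and the region are hypothetical assumptions; this is not Taloflow’s implementation, and a real version would also route the approvals he mentions.)

```python
from datetime import datetime, timezone

import boto3  # assumes AWS credentials are configured in the environment

ec2 = boto3.client("ec2", region_name="us-west-2")


def stop_idle_dev_instances(now=None):
    """Stop running instances tagged with an 'OfficeHours' window (e.g. '09-17')
    when the current UTC hour falls outside that window."""
    now = now or datetime.now(timezone.utc)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": "tag-key", "Values": ["OfficeHours"]},  # hypothetical opt-in tag
        ]
    )["Reservations"]

    to_stop = []
    for res in reservations:
        for inst in res["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            start_h, end_h = (int(x) for x in tags["OfficeHours"].split("-"))
            if not (start_h <= now.hour < end_h):  # outside the owner's office hours
                to_stop.append(inst["InstanceId"])

    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return to_stop
```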

James: Different structural concepts. 

LVJ: Exactly, and that’s because of the microservice patterns. So teams used to work around monoliths and these decisions on resources used to be very centralized. Now they’ve become very dispersed. I’d say that the world’s kind of changed and the engagement model has to be with individual developers on a one-to-one basis and that’s a differentiator for Tim.

James: Saving people 10 billion dollars by 2025 is a big number. How did you get to that?

LVJ: The cloud is a booming space and it’s still the early days. When we speak to some Fortune or Global 2000 type companies, they say they’re on the cloud, and when we really dig in we find that, ‘okay, you’ve got a couple of workloads on the cloud and the rest is still on-prem.’

(Editor’s note: on-prem = on-premises: software that is installed and runs on computers on the premises of the person or organization using it, rather than at a remote facility.)

A risk to that hypothesis is the question of where we land. Is the cloud really everything, or are we going to be in a hybrid cloud world where people have equal parts on-prem and equal parts cloud? That’s probably more realistic, but still, if you look at the accelerating run rates for each of the major cloud providers and the emergence of secondary and tertiary providers, this is still going to be an industry that grows pretty quickly. Already the waste is anywhere between $35 and $40 billion a year based on current run rates. We’re trying to save our customers $10 billion in cloud costs by 2025. And we have a reasonable hypothesis here, which is that even the 35 to 40 percent we mentioned is probably not the whole picture for waste. It could be a lot more, but we’d have to unlock that with new kinds of automations.

James: As growth in usage continues, the positive view opens up. 

LVJ: Exactly. One of the areas where we think we can really unlock a new area of savings is in not just understanding your infrastructure, but understanding your business environment. So we would tie more into the event stream for the business. By understanding the various business events and how they relate to marginal cost changes on the cloud, we can tell you that your cost per shipment went from three cents to five cents, that you should be monitoring that, and that it relates directly to someone pushing out code last week that doesn’t seem to be performing as it should. So now you can unlock a whole new area of optimization. This is great for finance teams and engineering teams to collaborate on, because when you’re working at scale, monitoring the marginal cost of these processes, especially in production environments, is very important.
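
(Editor’s note: a minimal sketch of the “cost per shipment” idea, i.e. joining a cloud-cost series with a business-event stream over the same time window. The data shapes and numbers are illustrative assumptions, not Taloflow’s pipeline.)

```python
from collections import defaultdict


def cost_per_event(cost_records, business_events):
    """cost_records: [(datetime, dollars)], business_events: [datetime].
    Returns the marginal cost per business event for each hour."""
    cost_by_hour = defaultdict(float)
    events_by_hour = defaultdict(int)
    for ts, dollars in cost_records:
        cost_by_hour[ts.replace(minute=0, second=0, microsecond=0)] += dollars
    for ts in business_events:
        events_by_hour[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return {
        hour: cost_by_hour[hour] / count
        for hour, count in events_by_hour.items()
        if count
    }

# Example: if an hour's shipping pipeline cost $5.00 across 100 shipments,
# the marginal cost that hour is $0.05 per shipment.
```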

James: Can you foresee a day where usage of Tim is applicable to other situations down the line?

LVJ: Yes. It’s very early to say this, but the hybrid cloud has a different way of thinking about cost. It’s more about usage. You’re thinking about what is the most efficient way to allocate capacity, whether it’s compute, storage or transfer. We found a lot of companies that are mostly on-prem because they have huge scale and don’t want to have their infrastructure on Google or Amazon. They’re still having these issues where you have a tragedy of the commons: they have a set amount of resources, and some teams use more than others and sometimes use them very inefficiently. Fixing that tragedy of the commons is going to require a tool very similar to Tim. Potentially we can move into the hybrid space in the future, but we’d like to see how things shake out first.

James: What is cloud optimization? Is it possible to provide a more basic explanation of what it is for people?

LVJ: Of course. If it were just cloud cost savings, the first thing I’d do is use the cloud less, but then that has an impact on your business. The idea is not just to freak out and cut back costs in a senseless manner. Oftentimes that can be the gut reaction: ‘Hey, we need to improve our margins. I’m spending way too much.’ All of a sudden you start decreasing the performance of your application, and your customers start to complain about lag times and so on. Optimization is more about finding that equilibrium, that efficiency frontier. It’s: how performant do you want your application to be, and at what cost? Maybe it’s okay that processing something takes five seconds. But maybe it’s better to go to three seconds as long as the incremental or marginal increase in cost is palatable. That’s why it’s an optimization: you’re trying to save money without compromising performance.
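
(Editor’s note: one way to picture that efficiency frontier is choosing the cheapest configuration that still meets a latency target, rather than minimizing cost in isolation. The configurations and prices below are made up purely for illustration.)

```python
# Hypothetical latency/cost options for one service.
configs = [
    {"name": "small",  "p95_latency_s": 5.0, "cost_per_hour": 0.10},
    {"name": "medium", "p95_latency_s": 3.0, "cost_per_hour": 0.18},
    {"name": "large",  "p95_latency_s": 1.5, "cost_per_hour": 0.45},
]


def cheapest_meeting_slo(configs, max_latency_s):
    """Return the lowest-cost configuration whose p95 latency meets the target."""
    eligible = [c for c in configs if c["p95_latency_s"] <= max_latency_s]
    return min(eligible, key=lambda c: c["cost_per_hour"]) if eligible else None


print(cheapest_meeting_slo(configs, max_latency_s=3.0))  # -> the "medium" config
```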

James: How fast are people going from insight to action, and in comparison, how fast should they be going?

LVJ: There’s a lot of dashboarding tools out there for cloud monitoring but they require quite a lot of digging. Engineers are expensive and their time is very expensive. We found that when companies address this cost problem, they have huge dedicated teams for cost optimization alone. We found that some of the bigger AWS clients have sometimes hundreds of developers that just focus on cost optimization. That’s a lot of money. Even smaller teams might have one or two DevOps people, but they’re spending a lot of time on this problem. So we can make their use of time way more efficient and help companies work in a way where they don’t have to struggle. It’s very hard to recruit engineers, and smart DevOps people or CloudOps people or site reliability engineers are very expensive and very hard to find. Not every company is going to recruit the good ones, because they have their pick.

James: They are the unicorns of the industry.

LVJ: Exactly. When you don’t have them or don’t have access to all the ones you need, tools like Tim help them get there.

James: How important was the partnership with AWS for you guys?

LVJ: We get referral customers from time to time, and so that’s great. I wouldn’t put a ton of emphasis on it because being an Advanced Tier Partner is, I think, more of a certification than anything. We found that earning that kind of credential from AWS has helped us harden our security. It’s a good validation point when people think, ‘I’m going to deploy Tim on my infrastructure. I’m generally comfortable with the permissions I’m giving it, but this is great as well.’ We have this badge, Amazon has taken a look at our application, and we had to check everything off before getting that badge.

James: What don’t we know about Taloflow that perhaps we should?

LVJ: Taloflow is taking a very strategic approach to this problem. Companies that are on the cloud tend to spend about two to three percent of their overall AWS budget on cloud management. Cloud management is not just cost optimization, there are other areas to it like security and compliance. The way Taloflow has been designed as a platform, it’s got this great pipeline with a stream of events and these windowed events so it can ingest billions and billions of events and know how to make sense of them, which is extremely powerful. The first area we tackled, of course, is cost optimization. But that same kind of monitoring can be used for security, for compliance and for all kinds of things that can give you information about your infrastructure overall.
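
(Editor’s note: a minimal sketch of the kind of windowed aggregation described here, bucketing a stream of usage or billing events into fixed windows before anything downstream reasons about them. The event shape and window size are assumptions, not Taloflow’s pipeline.)

```python
from collections import defaultdict


def tumbling_window_totals(events, window_seconds=300):
    """events: iterable of (epoch_seconds, service, dollars) tuples.
    Sums cost per service within fixed, non-overlapping windows."""
    totals = defaultdict(float)
    for ts, service, dollars in events:
        window_start = ts - (ts % window_seconds)  # bucket into 5-minute windows
        totals[(window_start, service)] += dollars
    return totals

# Example: three EC2 cost events in the same 5-minute window collapse into one
# aggregated data point, so downstream alerting isn't flooded by raw events.
```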

James: It reaches back to my previous question about where the basic applications can go besides cost optimization.

LVJ: Yes. They could go toward people having stolen credentials, people not setting their permissions correctly, or regulations and compliance requirements. Compliance is going to become a very big and important area. Backup and recovery, making sure that the information you have is not openly accessible. We hear all the time about companies whose S3 buckets, which is where they store information for clients, are left openly accessible, and that’s how data breaches happen.
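
(Editor’s note: the open S3 bucket problem can be audited with a short, generic boto3 check like the sketch below, which flags buckets whose public-access protections are missing or switched off. This is illustrative only, not Taloflow’s product code.)

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def possibly_public_buckets():
    """Return bucket names with incomplete or missing public-access blocks."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            block = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"
            ]
            if not all(block.values()):  # some public-access protection is off
                flagged.append(name)
        except ClientError:
            flagged.append(name)  # no public-access block configured at all
    return flagged
```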

James: It’s not the most immediate thing for DevOps teams to think about, let alone the C-suite, to be like, ‘Oh, I should be careful about protecting our data.’

LVJ: A bit of that, and it’s just an effect of what the engineers focus on at the time. They might be rushed to push out a feature and be careless with something else. It depends on the team, but that happens more often than we think, and it could destroy a business. There’s more and more information on the cloud, so if it’s not secure or if companies aren’t following best practices, we could probably identify that kind of stuff in a future version of Tim. I’m not saying that’s where we’re going now, but it’s a possible use. But something that people might not know about Tim is that the number of events we’re ingesting is huge.

James: How deep into my data can I dig if you’re going through such a massive amount of events?

LVJ: We monitor every single state change on Amazon that you give us access to. We record everything and monitor those state changes. That’s what allows us to get really granular on cost. To give you an example, people tend to wait three, four or five days until they understand what they’re spending on Amazon. In some cases, they wait until the end of the month. Because we’re monitoring the state changes and understand how interconnected and intertwined everything is, we know how changing something in one place will impact something somewhere else. You might think you’re saving money, but you’re increasing cost somewhere else. All these things happen. That’s why we’re real time, as real-time as it gets.
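
(Editor’s note: one generic way to observe AWS state changes is to read management events from CloudTrail, as in the sketch below. A real-time system would stream these continuously; this only polls the last hour and is not Taloflow’s implementation.)

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")


def recent_state_changes(hours=1):
    """Return CloudTrail management events from the last `hours` hours."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    events, token = [], None
    while True:
        kwargs = {"StartTime": start, "EndTime": end, "MaxResults": 50}
        if token:
            kwargs["NextToken"] = token
        page = cloudtrail.lookup_events(**kwargs)
        events.extend(page["Events"])
        token = page.get("NextToken")
        if not token:
            return events


for e in recent_state_changes():
    print(e["EventTime"], e["EventName"])  # e.g. RunInstances, StopInstances
```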

James: Where do you see Taloflow five years from now?

LVJ: Five years from now? We are the leader in cloud management, and our ambition is to be, in many ways, the meta-cloud app operator. We believe that cloud providers have an agency problem when it comes to cost, and there are many other areas, like compliance and security, where they’re lacking. But specifically on cost, they have an agency issue. Because of that agency issue, we think we’re going to get to a place where people rely on Taloflow to be the abstraction layer through which they work with the clouds. When we’re the abstraction layer for that, it means you’re never logging into Amazon; you’re doing everything off of Taloflow because it’s way more intuitive.

The other thing is that we are very big on openness and transparency. It’s a big part of the company, of course, that we’re trying to give transparency on cost. That openness and transparency lends itself to this idea we have, which is to build an open-cost framework. Today, there is no standardized way for platforms of all kinds, whether they’re SaaS platforms or even the cloud providers, to report on or log cost in a standardized format. So we want to create that framework and standardize it. That way cost can be logged in a standardized way, and there is transparency between platforms. There is some skepticism about whether companies will move to a multi-cloud world; some people believe in it a lot, some people don’t. It’s really weird how polarizing that issue is right now, whether we’re going to be multi-cloud or not.

What that means is, if I’m an enterprise, am I going to have Azure, AWS, and GCP running at the same time? Can I run the same workloads seamlessly on Azure, AWS and GCP? Some people believe, nope, you are going to go deep into one cloud. Some people believe that people are going to go deep into one cloud and be hybrid. Some believe they’re going to be multi-cloud and hybrid. So this is really polarizing. People don’t really have a good idea, and honestly, there’s not enough data out there yet to tell what’s going to happen. But if the multi-cloud version realizes itself, there’s this idea that people are going to have you on multiple clouds because they want to avoid vendor lock-in: they don’t want Amazon to gouge them. That would be pretty interesting for Taloflow, because these cloud providers have an agency problem. If there’s a meta-cloud layer, an abstraction layer between the clouds, and we know the cost on each one of those clouds, then we could seamlessly move workloads from cloud to cloud based on what is most efficient and what meets your SLA (a service level agreement, a contract between a service provider and the end user that defines the level of service expected) and all the other requirements for running your business. All of a sudden that makes us a power player in this space. But it’s too early to tell whether that opportunity is available.
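
(Editor’s note: a sketch of what one record in the open-cost framework Louis-Victor mentions might look like. No such standard exists beyond the idea discussed in the interview, so every field name and value here is a hypothetical illustration.)

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class CostRecord:
    provider: str        # "aws", "gcp", "azure", or a SaaS platform
    service: str         # e.g. "ec2", "s3"
    resource_id: str
    usage_quantity: float
    usage_unit: str      # e.g. "instance-hours", "GB-months"
    cost: float
    currency: str
    window_start: str    # ISO 8601 timestamp
    window_end: str      # ISO 8601 timestamp


record = CostRecord("aws", "ec2", "i-0abc123", 1.0, "instance-hours",
                    0.0464, "USD", "2019-03-01T00:00:00Z", "2019-03-01T01:00:00Z")
print(json.dumps(asdict(record)))  # a standardized, machine-readable cost line
```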

James: What’s coming up for Taloflow that people need to be aware of?

LVJ: We’re in our beta period right now. We have all the companies we need at this point, but if a company really needs us, we might consider adding another one if they’re spending a considerable amount on AWS. What people should look forward to is when we become generally available.

James: Do you have an estimated time frame on that?

LVJ: Yes, probably May, or at the latest, June.

James: Is there anything else that you would like to cover or that you would like to mention?

LVJ: What would be good to highlight is the M&A (mergers and acquisitions) activity in the space. The precedent is pretty interesting. There’s this five-year-old company called CloudHealth. They’re a pretty broad and detailed dashboarding tool, but they lack a lot of the automation pieces that we’re focusing on, virtually all of them. The interesting thing with them is, they sold to VMware for what people are reporting was upwards of $500 million in a deal closed about two months ago. That was a recent deal. Last year Nutanix, which is publicly traded and the leader in hybrid cloud, acquired a company called Botmetric and renamed it Nutanix Beam, so that’s another acquisition.

Botmetric is similar in terms of cloud compliance and cost visibility. There are another company or two doing quite well, with annual run rates increasing at a rapid pace. Between CloudCheckr, Cloudability and CloudHealth, there are already a few players in the space that have reached $50 million and are getting close to $100 million in ARR, and have done so in a relatively short time period. There are some very innovative companies in the space that we admire. We feel that there’s so much activity in this space; it’s just such a big market. A lot of people are going to make a lot of money, and there’s already precedent for that.

James: Is there room for everyone? Or are you going to see people focusing on different aspects?

LVJ: A big reason why we’re different is that many of the tools I mentioned earlier have almost a singular focus on the finance team’s needs. Some of them start creeping into the engineering work. We feel we have a very SRE, CloudOps, DevOps first focus. The reason why we believe that’s the way to go, now that we’re at this point in the cloud cycle, is that the engineers have the levers to effectuate change, not the finance team. So if you really want to save money in a real-time world where everything’s changing constantly, you have to have that. You have to help effectuate a culture change within the engineering organization and that’s what our tool is for.

_

You can learn more about Taloflow at www.taloflow.ai

Louis-Victor Jadavji is the CEO of Taloflow. He is an entrepreneur with experience in health monitoring and 3D printing, a co-founder of Wiivv and one of Forbes’ 30 Under 30 in 2016, and the Globe and Mail named him a top eligible bachelor in 2018.

James Graham wants you to know about all the cool things happening at Victory Square Technologies. He also has a great radio voice.

 
