Brains Byte Back

Building trust across clouds: Expert insight on how AI cloud-native MFT platforms are empowering businesses (Brains Byte Back Podcast)

For modern, data-driven organizations, managing data effectively is an ongoing challenge. 

On one hand, data needs to be able to flow through the organization for both immediate operational uses and longer-term analytic goals. On the other hand, these sprawling digital infrastructures are much harder to secure. 

As a result, cybercrime is expected to cost organizations a massive $10.5 trillion in 2025. The problem is exacerbated by the fact that breaches go undetected for an average of 212 days and take another 75 days to contain.

Companies are keen to adopt cloud technologies to aid the growth and agility of their business. However, this means that securing the perimeter is no longer just a case of monitoring on-premise systems and networks. 

However, the cause could also offer us the cure. 

Managed file transfer (MFT) systems help to move data safely and securely internally and externally and protect messages and communications between the organization and its partners, suppliers and customers. They bring reliable, automated governance to the movement of files inside and outside the business and can accelerate big data movements around the globe.

What’s more, when combined strategically with cloud-native platforms, MFTs are the secret to keeping data in flow and supporting real-time intelligent insights. 

In this episode, we pick the brain of Oded Nahum, Global Head of Cloud Practice at Ness Digital Engineering (Ness), to get a deeper understanding of just why MFTs are so important to modern cloud infrastructures.  

At Ness, Nahum leads a talented team of cloud architects, consultants, and engineers delivering cutting-edge cloud solutions across AWS, Azure, Google Cloud Platform (GCP), and hybrid environments.

With deep experience in cloud-native architectures, IT automation, cloud security, and infrastructure-as-code (IaC), Nahum works to leverage the power of the cloud to unlock new business capabilities and integrate AI-driven solutions that enhance operational intelligence. 

In this episode, Nahum takes a deep dive into the benefits of MFTs when used on a cloud-native platform to keep data flowing across the organization without compromising on security. 

“Cloud-native MFT is not just an endpoint solution but a strategic capability. And its full value is unlocked when treated as part of a broader data exchange strategy.” 

You can listen to the full episode below, or on Spotify, Anchor, Apple Podcasts, Breaker, Google Podcasts, Stitcher, Overcast, Listen Notes, PodBean, and Radio Public.

Find out more about Oded Nahum here.

Reach out to today’s host, Erick Espinosa – erick@sociable.co

Transcript:

Oded Nahum:
Hi everyone, my name is Oded Nahum. I work for Ness Digital Engineering. Currently my role is the global head of the cloud and data streaming practice. My team is a consulting arm that works alongside our engineering friends at Ness. We tend to work with many clients that are trying to solve complicated and challenging problems when it comes to cloud, anything from adoption to strategy to optimization, to figuring out which technologies to use. So we get to touch the very core of the systems of the clients that we work with.


A lot of the time we, you know, push their imagination and boundaries a little bit more about how to maximize the value of what cloud computing can do. And this is really kind of the way that I approach cloud — it’s a value enabler. On its own, it’s just a platform. It’s what you do with it, and how you understand the business problems that you’re trying to solve for your clients, that defines the way that we operate and what we’re building for our clients.

Erick Espinosa:
Perfect. I’m excited to dive deeper into this. Oded, first I want to thank you for taking the time to join me today for this episode. I’m sure your insight will be very valuable for the variety of listeners out there right now, especially those looking to build their company’s AI infrastructure.

We’re seeing organizations increasingly double down on AI and data-driven strategies, and the numbers show this specifically. I’m going to quote a KPMG report here: over the next year, 68% of companies are planning to invest between $50 million and $250 million. And this is companies of all sizes. This is having a big impact on platforms like AWS and Azure and how they are evolving specifically. Can you talk to us a little bit more about that?

Oded Nahum:
Yeah. So AI: when this hype picked up, I kind of looked at it and was like, wait, why are you guys getting excited? We’ve been doing it for about 10 years, right? But actually what triggered this thing is not AI, it’s Gen AI, right? AI, artificial intelligence, machine learning: these have existed for quite some time. And we’ve been in that space — pretty much every kind of data engineering and analytics workload uses some of these things.

What made this thing so interesting is the Gen or the generative AI, which basically allows these models to start creating things that did not exist before. And that opened up this big Pandora’s box that we’re trying to figure out now what to do. There’s a huge amount of excitement, right? And everybody’s trying to figure out: how do I use it? What’s the value? Where do I put it to work?

Many companies that we work with are trying various different things. And I think we’re all, in a certain way, looking at: okay, what is the actual business value that I can bring? Where do I go from a pilot to actually getting this thing to work? In the last six months, I think we’re seeing positive signs from the technology and the use cases. It’s going from just a crazy hype to real value that we can create. It comes with a whole set of challenges, right? Don’t get me wrong. This is still kind of unknown territory for many of us. All the things it can do and the challenges that it brings — not just from a knowledge perspective, but from adoption — you’re kind of giving half your brain to a machine that you hope will get it right. And it doesn’t always do that. So we’re still in the middle of that hype and journey, but it’s becoming a little bit clearer and definitely more interesting.

Erick Espinosa:
Why would you say most enterprises are using, I guess, more cloud native AI infrastructure? What would be the benefits of this specifically?

Oded Nahum:
One of the things about AI is that it takes an enormous amount of compute power. This is specialized silicon — the NVIDIAs of the world. I mean, companies don’t normally go and invest millions of dollars setting up the infrastructure that they need to run these models. And in the cloud, it’s just very accessible. For you to bring up a workload and say, go run these things on cloud infrastructure, it becomes super easy.

I mean, that has been one of the values of cloud before Gen AI, right? The fact that you can get access to all of the latest and greatest technology with literally a click of a button without commitment of buying anything. So this becomes a consumption-based model that’s very attractive.


How do you experiment? How do you iterate? How do you test different things? You know, that’s what makes cloud the perfect place to start running AI or Gen AI workloads. For many companies, a lot of the data is on the cloud as well. And if you start building Gen AI applications — I’m not talking ChatGPT, but RAGs (retrieval augmented generation) that look at your own data — that data is in the cloud. So obviously you want to run it where your data is. 

So it’s kind of a perfect fit. That said, today we are seeing specialized cloud environments that focus on just giving you the raw power of GPU processing, and they come at a better price point. They don’t have all of the manageability that AWS and Azure give us, or all of the tools and UIs — you still have to do the lifting. But if you’re just looking at the price per GPU cycle, you can find interesting pricing alternatives.

Erick Espinosa:
I always used to associate GPUs with gaming. And now I’m hearing about them a lot more in terms of building infrastructure for companies. Because, I guess, when you’re thinking about a company, especially one that’s just starting to invest, you want to make sure that you have enough money to invest in this infrastructure, right? But it sounds like cloud-native platforms at least allow people to enter this realm with their finances in mind. Do you know what I’m saying?

Oded Nahum:
Well, definitely. I wish more of them would do that, right? Understanding the cloud economics is still, I think, an industry-wide challenge. Someone’s giving you access to an incredible technology — just go and use it, it’s only like three cents per second — and then you get the bill at the end of the month and go, “Oh my God, what did I just do?” That is still a problem.

But if you get it right, yes, the economics is very attractive. But it is still something that we as an industry are trying to figure out — the cost modeling of cloud, right? Is it better to run it on-prem? And, you know, the last few years, we had this movement called cloud exits, where people were like, “Oh, cloud is too expensive. I’m going back to my data center.” I don’t think it’s a big phenomenon, but it does happen. And there are scenarios where you say, “You know what, for my workload, my data center is better.”
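
[Editor’s note: Nahum’s point about cloud cost modeling can be made concrete with a back-of-the-envelope break-even calculation. The sketch below is purely illustrative; the prices, overhead, and amortization period are hypothetical placeholders, not quotes from any provider.]

```python
# Hypothetical break-even comparison: renting cloud GPUs vs. buying
# on-prem hardware. Every number here is an illustrative placeholder.

CLOUD_RATE_PER_GPU_HOUR = 3.00   # assumed on-demand price, USD
ONPREM_GPU_COST = 30_000.00      # assumed purchase price per GPU, USD
ONPREM_HOURLY_OVERHEAD = 0.40    # assumed power/cooling/ops per GPU-hour
AMORTIZATION_YEARS = 3

def onprem_cost_per_useful_hour(utilization: float) -> float:
    """Effective on-prem cost per *useful* GPU-hour at a given utilization."""
    total_hours = AMORTIZATION_YEARS * 365 * 24
    return ONPREM_GPU_COST / (total_hours * utilization) + ONPREM_HOURLY_OVERHEAD

for utilization in (0.10, 0.30, 0.60, 0.90):
    onprem = onprem_cost_per_useful_hour(utilization)
    winner = "on-prem" if onprem < CLOUD_RATE_PER_GPU_HOUR else "cloud"
    print(f"utilization {utilization:4.0%}: on-prem ${onprem:6.2f}/h "
          f"vs cloud ${CLOUD_RATE_PER_GPU_HOUR:.2f}/h -> {winner}")
```

Under these made-up numbers, the consumption model wins easily at low utilization; only at sustained high utilization does the “cloud exit” math Nahum mentions start to look plausible.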

Erick Espinosa:
And a big focus recently has been MFTs, which are managed file transfers. I just learned about this. But for those that are unfamiliar, can you explain a little bit more about what that is?

Oded Nahum:
MFT is making a comeback because it’s actually not a new technology.

We’ve been moving files for many, many years. There are a bunch of traditional solutions that companies used to run in their data centers — big monolithic Windows-based solutions. And the problem with these solutions was, one, security. You mentioned security. Keeping up with the security patches on these machines is sometimes challenging. There were some breaches that companies got scared of. But the biggest problem was manageability. How do you manage a system that needs to process 10,000, 20,000, 50,000 files per day?

Think of any large regulated industry that needs to work with their business partners — which are not internal business units of a company — a bank, credit companies, payment processing, insurance companies, car manufacturers. All of them have hundreds, if not thousands of business partners that continuously send files to them, whether it’s a car configuration or a loan application.
 

So essentially, you’re opening up the front door of your most secure environment and saying, “Send me 100,000 files.” That is a huge security problem that needs to be understood because security sometimes is a question of statistics, right? If I got three people in my house, I know who they are, I can vouch for them. If I got 10,000 people coming into my house, statistically, one of them might be a bad actor. So MFT started moving to the cloud because it had the scale to process a lot of that data and it had the right set of controls to create that end-to-end security structure.

Anywhere from the protocol level — we use SFTP, and SFTP has been around for a long time. I don’t know if you ever heard of the Lindy effect. It’s an interesting way of looking at it. The Lindy effect basically says the amount of time that something has survived can predict how long it will survive in the future. 

Erick Espinosa:
That’s very interesting.

Oded Nahum:
Think of Shakespeare — it’s been around for a while. Maybe it’s going to be here 100 years from now, right? That’s kind of the way it works, and it’s an interesting statistical model. SFTP is a protocol that has kind of survived the test of time. It’s been here for 30 years. It’s industry-accepted, and that allows us to create this communication channel between anyone and my own data center. Now, once data gets into the cloud, we can apply multiple layers of control over it, anywhere from virus scanning to PII scanning to whatever. I guess AI comes into play here because we can do some anomaly detection and start to understand patterns that might suggest that this is not what we expected to get.

How do you do it at massive scale and fully automated? Because nobody’s sitting and clicking 100,000 times to move a file from one place to the other. So that suddenly brought the excitement back to how we use these things. For companies that still rely on files, whether it’s a PDF or Excel or CSV, it created a very interesting kind of structure. We’re already seeing the evolution of MFT, because if you look at data movements across organizations, there are various other ways. There’s API, there’s event-driven architectures, and MFT fits very well into the bigger picture of data exchange platforms.
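
[Editor’s note: to make “fully automated at massive scale” concrete, here is a minimal sketch of an event-driven intake step, written in the style of an AWS Lambda handler reacting to new objects in an S3 landing bucket. The bucket names and the two scan helpers are hypothetical stand-ins, not part of any Ness product.]

```python
# Sketch: every file landing in an S3 bucket is scanned before being
# released downstream; anything suspicious is diverted to quarantine.
import boto3

s3 = boto3.client("s3")

QUARANTINE_BUCKET = "example-quarantine"  # hypothetical bucket names
CLEAN_BUCKET = "example-clean"

def scan_for_malware(data: bytes) -> bool:
    # Placeholder: call a real AV engine (e.g. ClamAV) here.
    return False

def scan_for_pii(data: bytes) -> bool:
    # Placeholder: call a real PII detector here.
    return False

def handler(event, context):
    """Entry point for S3 'ObjectCreated' notifications."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Route to quarantine on any finding; release otherwise.
        suspicious = scan_for_malware(body) or scan_for_pii(body)
        dest = QUARANTINE_BUCKET if suspicious else CLEAN_BUCKET
        s3.copy_object(CopySource={"Bucket": bucket, "Key": key},
                       Bucket=dest, Key=key)
        s3.delete_object(Bucket=bucket, Key=key)
```

The point is the shape, not the specifics: the file triggers an event, the event triggers the controls, and no human clicks anything 100,000 times.]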

Erick Espinosa:
So are you suggesting this is going to become, I guess, the primary solution, or the direction that most organizations are going, when it comes to transferring these types of files?

Oded Nahum:
If the data you’re moving is a file, it is definitely the workhorse that does all of that work, the heavy lifting. More modern architectures involve streaming data. So I’ll give you an example, right? Think of an exchange, a capital market exchange, where people trade stuff. This thing produces an enormous amount of data that’s continuously being streamed over to a system that analyzes and looks at the data. That is a streaming type of architecture. Definitely some of the older systems do not support the ability to stream. So what they do is they package that data in big files and then they move it, what we call end-of-day batch processing. That’s where MFT comes in.

If the system is more modern and it has the ability to stream data live, then we use streaming technologies like Kafka and things like this. But these two work very well together, right? And in many situations, we see both of them coexist. If the system that’s producing the data is old and it only understands files, let’s MFT it. If it can stream, let’s stream it. And both of them can be processed, understood, analyzed, stored, and put into a data warehouse for analytics and AI and all of the fun stuff we do with data.
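
[Editor’s note: the file-versus-stream split can be sketched in a few lines. Assuming a Kafka cluster and the kafka-python client, a modern producer streams each record as it happens, while a legacy system accumulates the same records into an end-of-day file for an MFT pipeline to pick up. The topic and file names are hypothetical.]

```python
# Two routes for the same trade records: stream them live to Kafka,
# or batch them into an end-of-day file for MFT. Names are hypothetical.
import csv
import json
from kafka import KafkaProducer  # pip install kafka-python

trades = [
    {"symbol": "ABC", "qty": 100, "price": 41.25},
    {"symbol": "XYZ", "qty": 50, "price": 17.80},
]

# Modern path: stream each record as it happens.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
for trade in trades:
    producer.send("trades", value=trade)
producer.flush()

# Legacy path: accumulate the day's records into one file and hand
# that file to the MFT pipeline at end of day.
with open("trades_eod.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["symbol", "qty", "price"])
    writer.writeheader()
    writer.writerows(trades)
```

Downstream, both routes can feed the same warehouse, which is why the two patterns coexist so comfortably.]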

Erick Espinosa:
The industry that comes to mind for me would be the financial industry. Because I know you guys obviously specialize in MFTs, would you say they’re the primary industry that’s focusing on this as a secure way to transfer data?

Oded Nahum:
Yeah. Think payment processing, right? How many credit card transactions run in the US in a day? I don’t know, trillions. All of these things are aggregated through different exchanges and become files that contain these transactions. And then they need to go to a processor or whatever. So this is where you see some of these use cases. Insurance is another one that we see a lot; we get close to financial services. Loan processing. Everyone that still relies on somebody sending a file, right? Now multiply that by the scale of that industry.

Another industry we’re seeing a lot is healthcare. They’re still kind of old school, sending files around, right? Between different hospitals or clinics. And the files can be an x-ray, but it can also be an application that somebody filled.

Erick Espinosa:
That came to mind for me because years ago, one of my first jobs was working at a bank. They had large filing cabinets with all the files of these customers, and the first thing I had to do was literally spend the whole day just filing these away. So in my mind, when I think of anything to do with filing, it’s banking, right?

Oded Nahum:
It’s definitely the biggest industry that we’re seeing picking these things up. And for them, it’s a replacement of the old products, because they had them before. They’ve been using them, but they’re old. They’re monolithic. They’re hard to manage. They break. They need to be patched. There are security issues with them. And they’re not close to where the data is or where the data is being processed.

And that’s really where MFT in the cloud gets really interesting, because once the file comes in, there are endless things that we can do to process that file. I can pick it up in AWS, process it, and send it to Google, for example. Google BigQuery is one of the fastest and most sophisticated analytics engines. So companies want to use Google BigQuery, but MFT runs in a much better way on AWS. So these patterns work very, very well. That’s kind of a multi-cloud play. And there are so many things we can do with the data as it comes in that you couldn’t do when it was on-prem. I mean, that’s what gets people excited. Imagination is the limit here for what you can do with workflows and data processing.

A lot of it, again, comes back to security, which we mentioned. It is a big thing. We’re connecting pretty much every MFT we build to a security system that continuously monitors its behavior, understanding patterns and anomalies, and trying to detect scenarios. Because if you’re getting 10,000 files a day from multiple sources, yes, you know they’re your clients, but they’re not companies whose security you are in charge of. If they get breached and somehow that file ended up with you, now it’s your problem. So that’s where security comes in and needs to be very, very tight and strict.

Everything that comes in has to be monitored, scanned, understood, and analyzed before it goes downstream for processing. Because hackers are really good at moving across systems.
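
[Editor’s note: the multi-cloud pattern Nahum describes (land a file via MFT on AWS, analyze it in Google BigQuery) might look roughly like the sketch below, using the standard boto3 and google-cloud-bigquery clients. The bucket, key, and table names are hypothetical, and a production pipeline would add the scanning and monitoring steps discussed above.]

```python
# Sketch of a multi-cloud hop: fetch a CSV that MFT landed in S3,
# then load it into BigQuery for analytics. Names are hypothetical.
import boto3
from google.cloud import bigquery  # pip install google-cloud-bigquery

LOCAL_PATH = "/tmp/transactions.csv"

# Step 1: pull the file down from the AWS side.
boto3.client("s3").download_file(
    Bucket="example-mft-landing", Key="transactions.csv", Filename=LOCAL_PATH
)

# Step 2: push it into BigQuery on the Google side.
bq = bigquery.Client()  # credentials via GOOGLE_APPLICATION_CREDENTIALS
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the CSV header row
    autodetect=True,       # infer the schema from the file
)
with open(LOCAL_PATH, "rb") as f:
    job = bq.load_table_from_file(
        f, "example_project.example_dataset.transactions", job_config=job_config
    )
job.result()  # block until the load job completes
```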

Erick Espinosa:
And using the same tools, right? As much as we’re using AI for our benefit, they’re using it as a tool for them to hack.

Oded Nahum:
Yeah, we’ve seen some demos and scenarios of how an infected object or file gets into an S3 bucket, like a storage thing, and starts moving laterally. It identifies the systems that it can infect. And then ultimately, the goal for them is to get access to some sort of credentials that they can leverage.

And it’s surprisingly easy for people with knowledge. I think what we’ve seen with clients is that they don’t understand how to build the security for these solutions. The attack surface of the cloud is massive. It’s not like a big data center with big walls, firewalls, and nothing coming in and out. Cloud, by design, needs to be accessible and exposed—not exposed, but accessible—over the network. So the attack surface is much, much larger.

Erick Espinosa:
Out of curiosity, is it ever part of the discussion with the clients that you’re serving? If they decide, let’s say, to collaborate with another company, and they’re transferring files, do you ever advise them to do more research into what type of security systems the company they’re collaborating with has in place, just to make sure that everything’s protected?

Oded Nahum:
Absolutely. And for many companies, it becomes a policy. If you want to work with me, here’s the checklist. Prove to me that you follow these processes. And there’s a whole bunch of regulations and standards that validate that you are following these best practices for security. And there are even standards that can be implemented at the technology level itself. So it’s kind of a handshake between systems that says, “I want to send you data.” It’s like, “Okay, here’s the format.” And it’s “Okay, I can approve you sending.”

We can even implement some of that in the technology itself. But yeah, these standards exist. And we’re trying to promote more and more companies to agree on these standards, because ultimately, we want these collaborations. I mean, that’s the lifeline of many companies: to be able to send data. So we have to make it accessible, workable, but not crazy complex that nobody knows how to implement this.
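
[Editor’s note: that “handshake between systems” has a direct analogue at the protocol level. Below is a minimal sketch with the paramiko library: the sender refuses to talk to a server whose host key it has not already pinned, and authenticates with a key pair rather than a password. The host name, user, and paths are hypothetical.]

```python
# Sketch of a hardened SFTP upload: verify the server's host key against
# a pinned known_hosts file, authenticate with a client key, then send.
import paramiko  # pip install paramiko

client = paramiko.SSHClient()
client.load_host_keys("known_hosts")  # pinned, pre-exchanged host keys
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # unknown host -> abort

client.connect(
    hostname="mft.example.com",        # hypothetical endpoint
    username="partner-42",
    key_filename="partner42_ed25519",  # client authenticates with a key pair
)

sftp = client.open_sftp()
sftp.put("trades_eod.csv", "/inbound/trades_eod.csv")
sftp.close()
client.close()
```

Exchanging and pinning keys out of band is the technical half of the policy checklist Nahum describes: each side proves who it is before a single file moves.]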

Erick Espinosa:
Does this also speak to companies that integrate with platforms? Let’s say somebody’s using AWS, and it’s like a fleet management company, and they’re integrating a tool from another company into their platform. Does this also apply to them in terms of the platform, not just transferring data?

Oded Nahum:
Usually, I mean, it’s not streamlined, put it this way. And the cloud providers don’t always like to collaborate with one another. I mean, there are ways to do that. But it’s not an industry standard. Everybody’s doing multicloud now, but if you ask me, the biggest blocker of multicloud is still the cloud providers themselves. Because they want to be the only one, the only game in town, right? So there’s no real standard for how to exchange information between the clouds. But there are standards and technology that we can leverage to do that. It’s still a bit of a wild west, though.

Erick Espinosa:
In terms of the company that you work for, Ness Digital Engineering, and the solution that you provide specifically for MFT, what makes your company stand out in comparison to other companies entering this market?

Oded Nahum:
Yeah. So I think what’s interesting about Ness as a company, and MFT maybe more specifically, is that Ness started as an engineering firm, software development. We still are an engineering firm. A few years ago, we started investing in building capabilities around cloud, in addition to some more interesting data technologies. But ultimately, most of Ness is people who write software. So we’re coming in from this approach.

Some other cloud consulting firms come from the IT space: data centers, operating systems, servers. MFT for many years was part of the IT side. It wasn’t part of software development. And we kind of flipped that around. It’s like, yes, we understand the architecture and the infrastructure very well. But the real kicker is what we build, which I think is two things.

One, the workflow engine. We’ve customized the workflow engine so that it allows us to implement any kind of business logic that the customer can think of about what happens when a file comes in. A file comes into an entry point, it triggers an event, that event gets picked up, and now I need to figure out what to do with this file. You can do a lot of different things that you couldn’t do before—take the file, scan it, wait for three other files to arrive, but don’t do it if it’s two in the afternoon because I want to wait until midnight—all these crazy rules that control the behavior until that process ends and the file or files end up in their destination.

The second thing is that we built a user interface that simplifies adoption. A lot of companies approaching this may be new to the cloud. They’re used to Windows systems, click-click. Now this is all infrastructure as code, and it’s a bit of a blocker. So we built a UI that allows you to get up to speed pretty fast.

The last thing we’re building, which I think is very cool, is that the workflow engine is complicated for many people because the people who define the workflows are business people, not techies. We have our own language that we use to create these workflows. We realized that if a business guy can explain to me in plain language what they want to do, that’s a perfect use case for GenAI. Essentially, we created an interface for them to describe in human language the workflow they want to run, and the machine translates it into how it gets implemented.

So they would say: a file comes in; if the file name equals blah-blah.txt, then take the file, change its name, compress it, run a virus scan on it, and move it to a database in a different dataset. Now they can explain it in language—they don’t need to explain it in code. That’s where GenAI does a really good job.

The other thing is integration. These systems never live on their own. Nobody’s moving files for fun. You move it to integrate it with data warehouses, analytics, processing. And we’re quite good at doing those integrations with other systems.
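
[Editor’s note: Ness’s workflow language is not public, so the sketch below is purely hypothetical. It shows the kind of structured definition a GenAI translation layer might emit from the plain-language request Nahum quotes, plus a tiny dispatcher that walks the steps.]

```python
# Hypothetical illustration only: a structured workflow that a GenAI
# layer might produce from the plain-language request quoted above.

workflow = {
    "trigger": {"event": "file_arrived", "name_equals": "blahblah.txt"},
    "steps": [
        {"action": "rename", "to": "blahblah_{date}.txt"},
        {"action": "compress", "format": "gzip"},
        {"action": "virus_scan"},
        {"action": "load", "target": "analytics_db.other_dataset"},
    ],
}

def run_workflow(wf: dict, filename: str) -> None:
    """Minimal dispatcher: check the trigger, then trace each step."""
    if filename != wf["trigger"]["name_equals"]:
        return  # trigger condition not met
    for step in wf["steps"]:
        # A real engine would dispatch to tested handlers here; we only
        # print the plan to show the shape of the execution.
        print(f"{filename}: {step['action']} -> {step}")

run_workflow(workflow, "blahblah.txt")
```

The appeal of this division of labor is that the GenAI layer only produces a declarative plan like the dictionary above; the execution stays in deterministic, testable code.]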

Erick Espinosa:
Integration feels like another word for collaboration. A lot of what software developers and engineers are known for is collaboration, working together on projects. So a lot of what you said speaks to collaboration. And you guys are in the field of that, right? In terms of transferring files safely, because you have years of experience doing that specifically.

Oded Nahum:
Yeah. And understanding the systems or the software that needs to actually understand that file and do something with it is where that integration comes in. Because if your only view is just doing the plumbing of moving files, it’s only half the picture. Taking the files and putting them into an application or a database or an analytics system, that’s the other half, where we’re building these things.

Erick Espinosa:
Before I let you go, what are some final pieces of advice you recommend IT decision makers keep in mind for 2025?

Oded Nahum:
Whoa, that’s a loaded question. I think the first thing that comes to mind, specifically for cloud: always look at cloud as the value enabler of your business. Try to find out what that value is, try to quantify it, and then see how you’re going to use it. Don’t go all in on the cloud without thinking about how that thing is going to look and be operated. And then I’m sure you’ll be successful in doing that. Start with value, technology after.

Erick Espinosa:
Very good advice. Oded, thanks again for joining me. If somebody’s looking to connect with you, what’s the best way to get in touch?

Oded Nahum:
LinkedIn. There aren’t too many people with my name—you can Google me or find me on LinkedIn. Oded Nahum.

Erick Espinosa:
All right. I appreciate your time.

Oded Nahum:
Thank you very much. Talk soon.

Disclosure: This article mentions a client of an Espacio portfolio company.

Erick Espinosa

Erick Espinosa is the host of The Sociable’s “Brains Byte Back,” a podcast that interviews startups, entrepreneurs, and industry leaders. On the podcast, Erick explores how knowledge and technology intersect to build a better, more sustainable future for humanity. Guests include founders, CEOs, and other influential individuals making a big difference in society, with past guest speakers such as New York Times journalists, MIT Professors, and C-suite executives of Fortune 500 companies. Erick has a background in broadcast journalism, having previously worked as a producer for Global News and CityTV Toronto in Canada. Email: erick@sociable.co
