The Abacus approach to SAN Replication reflects its focus on customer satisfaction
There are plenty of advantages to using SAN-to-SAN replication for high availability, but what about how it’s implemented? In the second part of our interview with Josh Osborne, CTO of Abacus Solutions, we discuss the Abacus approach to the technology and how offering a solution not typically available to iSeries customers positioned Abacus ahead of the curve, differentiating the company from other High Availability providers. Challenges? There are a few, but not the kind you might expect. If you haven’t read the first part of this interview, you can find it here.
Is there anything you can particularly highlight about our approach to the process? Is there anything special about our pre-work, implementation, or post-implementation follow-up?
For us, once we implement, we work with the customer to take copies of that system and stand them up to ensure that they’re getting the results they expect on both ends. We’re testing the environments with them, not just saying “trust us that it’s done.” We want our customers to be able to run through the copies and make sure that they’re getting the results that they want.
So, we make sure that they can even run that copy as its own system?
Correct. An important point I want to highlight is that our Source and Target SAN are both production grade. It’s not like we have a “hotrod” running in Production only, and a weak copy of the storage on the other side. We could run Production on the HA side for 100% of the contract without degradation of performance.
That’s a super important distinction to make.
It is. It’s important that you do more than just copy the data. We have the capability of running a customer at 100% performance on the HA side. There’s no tiering. It’s all flash storage. We don’t do spinning disks in the HA system.
It sounds like that’s where we probably differ most when compared to other providers.
Yeah. We don’t play games with High Availability. If we’re going to take the time to do that, we’re going to give you your money’s worth.
Is there any kind of delineation of responsibilities during the process? What is the customer responsible for versus what we’re responsible for?
We manage all the replication. The customer has no responsibilities in that space. We have scheduled disk jobs on the machine that flush memory-resident objects to disk. Those jobs are part of the underlying magic that lets us capture objects still sitting in memory. But that’s the only thing happening on the Production machine; everything else happens at the SAN level.
So, the only thing the customer has to worry about is coming in afterwards for testing to validate that everything is working properly?
Is there a difference for cloud customers versus on-premises?
We don’t offer much of this for on-premises customers. In the past, we’ve set it up for customers who manage the whole thing themselves, but part of the challenge with doing it that way is that the replication process becomes reliant on their network. That can sometimes go sideways pretty fast.
So, everything you described so far would mostly apply to a cloud customer?
Are there any challenges that pop up with SAN to SAN replication?
I’d say in the early days of HA SAN replication, bandwidth was a huge constraint on replication. However, with recent hardware iterations from IBM, there’s IP compression happening within the replication. The SAN compresses the data as it ships it across the wire to the other SAN. We see almost an 8-to-1 gain in replication performance. IBM i data compresses nicely, and that makes bandwidth a non-issue.
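To make the bandwidth point concrete, here is a rough back-of-the-envelope sketch. The link speed and daily change volume below are hypothetical illustrations; the ~8:1 compression ratio is the figure quoted above.

```python
# Rough effective-bandwidth arithmetic for compressed SAN replication.
# link_mbps and change_tb are hypothetical; compression_ratio is the
# ~8:1 figure quoted in the interview.

link_mbps = 100            # hypothetical WAN link: 100 Mbit/s
compression_ratio = 8      # ~8:1 compression on IBM i data

# Every Mbit on the wire carries ~8 Mbit of logical change data,
# so the link behaves roughly like an 800 Mbit/s pipe.
effective_mbps = link_mbps * compression_ratio

# How long to replicate 1 TB of changed data over that link?
change_tb = 1
change_mbit = change_tb * 8 * 1024 * 1024   # TB -> Mbit (binary units)
hours = change_mbit / effective_mbps / 3600

print(f"Effective throughput: {effective_mbps} Mbit/s")
print(f"Time to ship {change_tb} TB of changes: {hours:.1f} hours")
```

Under these assumptions, a change set that would take almost a full day to ship uncompressed crosses the wire in about three hours, which is why compression turns bandwidth from the dominant constraint into a non-issue.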
What about failovers and failback management? Do you guys ever have to deal with that?
If a customer has a disaster where they need to fail over or fail back, that is a managed process for us in which we coordinate how that needs to occur with the customer.
Data corruption is often listed as a common challenge in SAN based replication. Is that something you see often?
I think that might be more of an issue in the Windows space. With IBM i, the way the storage is architected, if you have a damaged object in Production, that damaged object will be replicated with the same damage to the Target. The SAN is really working at the block level, so whatever is written at the block level on the Production side is what gets written to the Target side. It’s not aware of the data itself; it’s only aware of the 1s and 0s.
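The block-level behavior described above can be sketched in a few lines. This is a toy model of a block device, not any vendor’s replication engine: blocks are copied byte for byte, so a "damaged object" (here just unexpected bytes in a block) arrives on the target exactly as it left the source.

```python
# Minimal sketch of block-level replication. A device is modeled as a
# list of fixed-size byte blocks; only changed blocks are shipped.
# The point: the replicator copies raw blocks and cannot tell damaged
# data from valid data -- it only sees 1s and 0s.

BLOCK_SIZE = 512

def replicate(source_blocks, target_blocks, changed):
    """Copy the changed blocks, byte for byte, from source to target."""
    for i in changed:
        # No inspection of contents: whatever bytes sit in the block
        # on the source side land unchanged on the target side.
        target_blocks[i] = bytes(source_blocks[i])

source = [b"\x00" * BLOCK_SIZE for _ in range(8)]
target = [b"\x00" * BLOCK_SIZE for _ in range(8)]

# Simulate a "damaged object": garbage bytes written into block 3.
source[3] = b"\xde\xad" * (BLOCK_SIZE // 2)

replicate(source, target, changed={3})
assert target[3] == source[3]   # the damage replicates intact
```

This is why block-level replication protects against hardware loss but not against logical corruption: protection against the latter comes from point-in-time copies taken before the damage occurred.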
Do you have a lot of customers that come to you and say, “Look…I’m just looking for software-based replication”?
Yeah, because it’s what they know. It’s what they’re used to. It’s what they’ve been doing. “This is what we’ve done for ten or twenty years, and this is what we’re going to continue doing.” To which we say, yeah sure we can do that for you, but let me show you something. And then we tell them to walk down the hall and talk to their Windows guys because they’ve been doing this type of backup for a long time. The iSeries is just now starting to catch up to it.
Are you ever met with resistance or hesitance from customers when suggesting and implementing SAN to SAN replication because they weren’t sure we could pull it off?
We’ve had customers who, in the past, faced challenges with the traditional way. It took them a year to get the traditional, software-based replication working well and they still had to make compromises. But then we come in and set up the SAN based replication and do a test, and then we do six tests because they want to see it work again and again because they just can’t believe it. Their reaction is always, “Wow.”
It sounds like just showing them a quick test is convincing enough.
Yeah, we have Dev environments where we can show them how it all works. We do proof of concepts for them in our space so we can show customers how the solution will work for them. We’ll gladly set it all up and go through it if a customer is asking us to prove it. We’ll prove it.
Most of the customers we have migrated over to this process are just in awe of it once we get there. The response we get is, “It sounded like magic when you were selling it to us, and I just had to believe, and now I understand why. I see it now.” When your foundation is in the traditional methodology of doing replication, it just seems impossible that we can deliver this level of service and this level of protection without impact.
What about the future of SAN to SAN replication? Any thoughts as to where the technology is headed?
I see the cost and scale coming down so much that traditional downtime and its costs become a thing of the past. Flash copies, with everything written to virtual tape and then replicated, become the standard solution. Today, the traditional model is that the customer must plan for a set number of hours of downtime per night or week on their production machine for backups. I think that’ll all go away.
Also, as the density and cost for these solutions comes down, providers like us can offer zero downtime just as a baseline standard. Today that’s a premium service, and I think in the future that’ll just become a standard benefit of being in the cloud.
For companies that run their business operations on the IBM i platform, the traditional methods for achieving high availability come with limitations. High operational costs, complicated hardware maintenance, and the need to schedule outages can quickly outweigh the benefits of having data backups. Abacus can help them overcome these limitations, and more, with expertly designed and implemented HA SAN solutions. If you have questions about our data replication processes or would like to see a demo, contact us today!