azureHeadache on Sun, 14 Jan 2018 00:57:27
I am aware of how persistent storage works in stateful services and actors, and I have seen several examples of it. However, when experimenting in the framework, anything I save is destroyed when the cluster is shut down. This makes sense - I get it. But I don't want this behavior. I want the data to remain persistent.
I have considered using Azure SQL and Azure Tables as a means to back up and preload state information when a cluster goes down or comes up. However, I have a bunch of concerns - and, lazily, I would prefer to hear from somebody who has already slain this dragon, because I can't imagine it's unique to my application... it's just that no tutorials seem to cover REALLY persisting state between debugging sessions. And really, I'm coming from a traditional monolithic model with Azure SQL doing its thing in the background. I'm trying to break out of that thought pattern and see other ways to achieve my business objectives.
- When the services come up, I can either preload everything I expect to need immediate access to, or I can implement a cache-miss strategy that lets the service hydrate organically. IF I preload, what are the ramifications of the primary and the secondaries coming up simultaneously? While they hydrate, will they try to stay in sync? Seems fraught with danger.
- When writing to a ReliableDictionary, I want to persist that information in permanent storage so that if the cluster goes down (i.e. I stop debugging and end the app), I can retrieve it again later. The problem is that I'm now writing to both the dictionary AND to some external store. I think I'd like to do the local write and then asynchronously write the "archive" record in some fashion. Is this a good use of Azure queues, or is this where another service that handles these async writes comes into play?
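One documented way to handle the dual-write concern in the second bullet is to make the archive request part of the same local transaction as the dictionary write, then drain it to external storage in the background from RunAsync. Below is a minimal sketch, assuming the Reliable Services SDK (Microsoft.ServiceFabric.Services / Microsoft.ServiceFabric.Data); `OrderRecord` and `ArchiveToExternalStoreAsync` are hypothetical placeholders, not real APIs:

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class OrderService : StatefulService
{
    public OrderService(StatefulServiceContext context) : base(context) { }

    public async Task SaveOrderAsync(string key, OrderRecord order)
    {
        var orders = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, OrderRecord>>("orders");
        var archiveQueue = await StateManager
            .GetOrAddAsync<IReliableQueue<OrderRecord>>("archiveQueue");

        // One transaction covers both writes, so the archive request is
        // enqueued if and only if the dictionary write commits.
        using (var tx = StateManager.CreateTransaction())
        {
            await orders.SetAsync(tx, key, order);
            await archiveQueue.EnqueueAsync(tx, order);
            await tx.CommitAsync();
        }
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        var archiveQueue = await StateManager
            .GetOrAddAsync<IReliableQueue<OrderRecord>>("archiveQueue");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = StateManager.CreateTransaction())
            {
                var item = await archiveQueue.TryDequeueAsync(tx);
                if (item.HasValue)
                {
                    // Hypothetical helper: push to Azure SQL / Tables / blob.
                    await ArchiveToExternalStoreAsync(item.Value, cancellationToken);

                    // Committing only after the external write succeeds gives
                    // at-least-once delivery to the archive store.
                    await tx.CommitAsync();
                }
            }
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}
```

Because the dequeue commits only after the external write succeeds, the archive write should be idempotent (the same record may be archived twice after a failover).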
Micah McKittrick on Tue, 20 Mar 2018 23:50:43
Sorry to see no one has responded to this. Can I get an update on your issue? Were you able to find anything useful?
Brett Davis (bmw) on Wed, 21 Mar 2018 09:49:17
First off - if your main concern is retaining state between debugging sessions, this is as simple as right-clicking your Service Fabric project in Visual Studio, going to Properties, and setting the Application Debug Mode to Auto Upgrade instead of Remove Application.
If you actually face a production scenario where your entire cluster may disappear and come back to life, it's possible that stateful services aren't the right fit for your use case. One thing you can do is implement the built-in backup/restore hooks that are overridable in the StatefulService class. You can find documentation and examples here: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-backup-restore
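To make those hooks concrete, here is a rough sketch of the shape they take inside a StatefulService; the blob upload/download helpers (`UploadFolderToBlobStorageAsync`, `DownloadLatestBackupAsync`) are hypothetical, and the real signatures are in the linked docs:

```csharp
// Periodic backup loop: take a full backup, let the callback copy it off-box.
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        var description = new BackupDescription(BackupOption.Full, PostBackupCallbackAsync);
        await this.BackupAsync(description);
        await Task.Delay(TimeSpan.FromMinutes(5), cancellationToken);
    }
}

private async Task<bool> PostBackupCallbackAsync(
    BackupInfo backupInfo, CancellationToken cancellationToken)
{
    // Hypothetical helper: copy the local backup folder up to blob storage.
    await UploadFolderToBlobStorageAsync(backupInfo.Directory, cancellationToken);
    return true; // true tells the runtime the backup was persisted safely
}

// Called when the runtime detects (or an operator triggers) data loss.
protected override async Task<bool> OnDataLossAsync(
    RestoreContext restoreCtx, CancellationToken cancellationToken)
{
    // Hypothetical helper: pull the latest backup back down from blob storage.
    string localPath = await DownloadLatestBackupAsync(cancellationToken);
    await restoreCtx.RestoreAsync(
        new RestoreDescription(localPath, RestorePolicy.Force), cancellationToken);
    return true; // state was restored
}
```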
This is what we did: we back up our service state to blob storage every 5 minutes or so. It's important to note, though, that a restore only happens when your application detects a "data loss" event, which it tries to do automatically. If you want to force a restore (which you'll need to do if you delete your app and then redeploy, since Service Fabric doesn't consider this a data loss event), there's a PowerShell command that will trigger a restore on the cluster. It's not ideal, I know, but it's an option.
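For reference, forcing that restore path from PowerShell looks roughly like this; the cluster endpoint and service name are placeholders, and the cmdlet deliberately declares data loss on the partition so the service's OnDataLossAsync hook runs:

```powershell
# Placeholders: substitute your own cluster endpoint and service name.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"

# Declare data loss on the partition; Service Fabric then invokes the
# service's OnDataLossAsync, which is where the restore actually happens.
Start-ServiceFabricPartitionDataLoss `
    -OperationId (New-Guid) `
    -ServiceName fabric:/MyApp/MyStatefulService `
    -PartitionKindSingleton `
    -DataLossMode FullDataLoss
```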
azureHeadache on Wed, 21 Mar 2018 13:59:55
I did not find a solution. I ended up using stateless services with SQL Azure as a backing store. It's not ideal, since it doesn't truly embrace the whole microservices notion, but I have broken the app into several microservices and partitioned the data store in such a way that I can eventually move storage into each little service.
azureHeadache on Wed, 21 Mar 2018 14:07:50
I didn't know about the Application Debug Mode. THAT is hugely helpful.
No - in production, the entire cluster should never disappear. I did see the backup/restore hooks, but without knowing about the Debug Mode, I wasn't ready to make that commitment.
I like the idea of belt and suspenders, which you've implemented with blob storage. I guess I'm just concerned that a whole cluster COULD somehow suffer a catastrophe, and having no backup of that data anywhere is just stomach-churning. However, as I mentioned in my response to Micah, I have structured the solution to be microservice-like, with the only principal difference being the common data store. Migrating that over piece by piece shouldn't be too bad.
One last thing - since you've actually done all this - I heard there was talk of some sort of data browser in the works. Having a central data store makes it really easy with SSMS to see what's what. Sending everything out to individual services but having no simple way to visualize that data is cumbersome. Is there now a tool that allows you to browse service data a la SSMS or the Azure Portal?
Micah McKittrick on Wed, 21 Mar 2018 16:50:23
Thanks for getting back and I am happy you were able to find something that worked for you. And thank you Brett for coming in with some extremely useful information! :)