Improving your ASP.NET Core site's file handling capabilities – part 2 – Data migration
In part 1 of this article, I showed you how to hide file management in an ASP.NET Core application behind an interface and how to build concrete implementations of File Providers that target the local filesystem and Azure storage. With that setup, the application can work with files without knowing where they are stored. This in turn makes it easy to move away from files on the local filesystem to files in a remote location like Azure storage and also facilitates unit testing.
But how do you actually switch your data from one storage location to another? That's the topic of this article.
In part 1 I mentioned an application I work on regularly. The application started out small years ago as a single application running on a private virtual machine. However, as both demand and the complexity of the application grew, we migrated the application to run on Azure. It now runs on multiple instances of Azure App Services. The move from a single instance to multiple Azure instances introduced some challenges with the files that we had stored locally. We had a lot of code that worked with the filesystem directly, using the classes in the System.IO namespace. But that became an issue with our multiple frontend servers each requiring access to the same files. To solve this issue, we defined the following high-level steps:
- Abstract file management out to an IFileProvider interface
- Update existing code to target IFileProvider instead of working with the filesystem directly
- Inject the FileSystemFileProvider as a concrete instance for IFileProvider
- Test
- Configure Azure storage
- Migrate content
- Inject the AzureStorageFileProvider as a concrete instance for IFileProvider
- Test
I'll describe these steps in more detail in the coming sections. Note: You can find the full code that is presented below on Github here.
The IFileProvider abstraction
This has been discussed in detail in part 1 of this article. We defined the IFileProvider interface and created two concrete implementations: FileSystemFileProvider and AzureStorageFileProvider.
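As a refresher, the interface looks roughly like this. Note that this is reconstructed from the calls made later in this article, not copied from the repository; the member name DeleteFileAsync and the FileProviderFileInfo type name are assumptions, so check part 1 for the exact definition:

```csharp
// Sketch of the IFileProvider interface from part 1, reconstructed from the
// calls made in this article. DeleteFileAsync and FileProviderFileInfo are
// assumed names; see part 1 for the actual definition.
public interface IFileProvider
{
    Task StoreFileAsync(string rootContainer, string filePath, byte[] fileContents, bool overwrite);
    Task<byte[]> GetFileAsync(string rootContainer, string filePath);
    Task<List<FileProviderFileInfo>> GetFilesAsync(string rootContainer);
    Task DeleteFileAsync(string rootContainer, string filePath);
}

// Simple info object describing a stored file.
public class FileProviderFileInfo
{
    public string Path { get; set; }
    public string RelativePath { get; set; }
}
```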
Update existing code
This will probably be the hardest part, depending on the size of your code base. You need to find all locations that read from and write to the filesystem directly with classes like System.IO.File and replace them with code that targets the IFileProvider. Here's an example of what to look for. It's code from a file service that targets the filesystem and adds metadata about the file to the database using a repository:
public class FileService : IFileService
{
    private readonly IFileRepository _fileRepository;
    private readonly string _rootFolder;

    public FileService(IFileRepository fileRepository, string rootFolder)
    {
        _fileRepository = fileRepository;
        _rootFolder = rootFolder;
    }

    public void AddFile(string rootContainer, string filePath, int ownerId,
        byte[] fileContents, string contentType)
    {
        var fullPath = $"{_rootFolder}\\{rootContainer}\\{filePath}";
        System.IO.File.WriteAllBytes(fullPath, fileContents);
        var file = new File
        {
            MimeType = contentType,
            OwnerId = ownerId,
            RootContainer = rootContainer,
            FilePath = filePath
        };
        _fileRepository.Add(file);
    }

    // ... Other code here
}
To remove the dependency on the filesystem and use the IFileProvider, the code was rewritten to this:
public class FileService : IFileService
{
    private readonly IFileProvider _fileProvider;
    private readonly IFileRepository _fileRepository;

    public FileService(IFileProvider fileProvider, IFileRepository fileRepository)
    {
        _fileProvider = fileProvider;
        _fileRepository = fileRepository;
    }

    public async Task AddFileAsync(string rootContainer, string filePath, int ownerId,
        byte[] fileContents, string contentType)
    {
        await _fileProvider.StoreFileAsync(rootContainer, filePath, fileContents, false);
        var file = new File
        {
            MimeType = contentType,
            OwnerId = ownerId,
            RootContainer = rootContainer,
            FilePath = filePath
        };
        _fileRepository.Add(file);
    }
}
So, instead of using System.IO.File directly, the service now accepts an instance of IFileProvider in its constructor and then calls StoreFileAsync on it.
In a similar way, existing code that uses the File class to read, overwrite and delete files was updated to call methods on the IFileProvider.
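For example, a hypothetical read and delete pair would change along these lines. The method names, and the _fileRepository.Delete call, are illustrative rather than taken from the actual code base:

```csharp
// Before: direct System.IO usage inside the service.
public byte[] GetFile(string rootContainer, string filePath)
{
    return System.IO.File.ReadAllBytes($"{_rootFolder}\\{rootContainer}\\{filePath}");
}

// After: delegate to the injected provider instead.
public Task<byte[]> GetFileAsync(string rootContainer, string filePath)
{
    return _fileProvider.GetFileAsync(rootContainer, filePath);
}

public async Task DeleteFileAsync(string rootContainer, string filePath)
{
    await _fileProvider.DeleteFileAsync(rootContainer, filePath);
    // Also remove the file's metadata from the database.
    // The Delete signature here is hypothetical.
    _fileRepository.Delete(rootContainer, filePath);
}
```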
Inject the FileSystemFileProvider
During the refactoring of this code, we wanted to keep using the filesystem (as that's where our files still were), so we injected an instance of the FileSystemFileProvider. Here's how that would look for an ASP.NET Core application (in the Startup class). In our .NET Framework application, we use Ninject which supports a similar mechanism to configure singletons.
services.AddSingleton<IFileProvider>(
new FileSystemFileProvider(Configuration["RootFolder"]));
The RootFolder value comes from the configuration file and points to the main folder containing the files of our project. That's the same folder that was previously injected into the FileService directly.
Test
With all code updated to use the IFileProvider and with FileSystemFileProvider as the concrete instance configured to be injected into controllers and services, we tested the application to ensure everything worked as expected.
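The abstraction also pays off in unit tests: you can hand the service an in-memory fake instead of touching the disk. Here's a minimal sketch of such a fake; it assumes the IFileProvider members used in this article and implements only two of them:

```csharp
// A minimal in-memory IFileProvider fake for unit tests. Only
// StoreFileAsync and GetFileAsync are shown; the remaining members
// of the interface are omitted here for brevity.
public class InMemoryFileProvider : IFileProvider
{
    private readonly Dictionary<string, byte[]> _files = new Dictionary<string, byte[]>();

    public Task StoreFileAsync(string rootContainer, string filePath, byte[] fileContents, bool overwrite)
    {
        _files[$"{rootContainer}/{filePath}"] = fileContents;
        return Task.CompletedTask;
    }

    public Task<byte[]> GetFileAsync(string rootContainer, string filePath)
    {
        return Task.FromResult(_files[$"{rootContainer}/{filePath}"]);
    }

    // Remaining IFileProvider members omitted for brevity.
}
```

A test can then construct the FileService with this fake and a mocked repository, call AddFile, and assert on the stored bytes without any filesystem setup or cleanup.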
Configure Azure storage
Next, we created an Azure storage account and acquired its connection string. In the sample application that comes with this article you'll see "UseDevelopmentStorage=true" as the connection string in appSettings.json. That's a special connection string that targets the local emulator (Azurite in my case) as discussed in part 1 of this article. In a real-world app, this will point to a full Azure storage account. You can set the connection string for the production site using Azure Key Vault, or you can set them as part of your CI/CD pipeline. For the latter topic, check out part 5 of my article series "Building and auto-deploying an ASP.NET Core application".
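For reference, the relevant appSettings.json entries could look like this. The key names match the Configuration lookups shown in this article; the RootFolder path is just an example value:

```json
{
  "RootFolder": "C:\\Files",
  "StorageConnectionString": "UseDevelopmentStorage=true"
}
```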
Migrate content
Next, it was time to migrate content. There are a couple of ways to do this. For example, you can use Azure Storage Explorer to upload all files from your local filesystem into the remote storage account. In our case, we wanted a solution that was less manual and more reusable (we had multiple environments to migrate, and we expected to run the migration a few times during testing), so we decided to do it with code. Given that we had two nice IFileProvider implementations already, this was easy to do. First, we added a ClearAsync() method to the IFileProvider interface and concrete implementations. That allowed us to quickly wipe out all content in the target storage location. Whether you want such a method in your interface and whether you want to keep it after you're done migrating is up to you. There's obviously some risk involved, as calling ClearAsync() will delete all your data.
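To give an idea of what ClearAsync() could look like, here are two sketches. These are illustrations under assumptions rather than the repository's actual code: the _rootFolder and _blobServiceClient fields are assumed, and the Azure version assumes the Azure.Storage.Blobs SDK used in part 1:

```csharp
// FileSystemFileProvider: delete the container folder and recreate it empty.
public Task ClearAsync(string rootContainer)
{
    var path = System.IO.Path.Combine(_rootFolder, rootContainer);
    if (System.IO.Directory.Exists(path))
    {
        System.IO.Directory.Delete(path, recursive: true);
    }
    System.IO.Directory.CreateDirectory(path);
    return Task.CompletedTask;
}

// AzureStorageFileProvider: delete every blob in the container.
public async Task ClearAsync(string rootContainer)
{
    // Azure container names must be lowercase.
    var containerClient = _blobServiceClient.GetBlobContainerClient(rootContainer.ToLowerInvariant());
    await foreach (var blob in containerClient.GetBlobsAsync())
    {
        await containerClient.DeleteBlobIfExistsAsync(blob.Name);
    }
}
```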
With ClearAsync() implemented, the migration application was actually pretty simple. Here's the full code for a console application that migrates all files in the local filesystem to an Azure storage account. It targets the Azure emulator but all you would have to change is the connection string to target an actual remote Azure storage account:
class Program
{
    // Update to match the root containers you'd like to migrate.
    private static readonly string[] RootContainers = { "Images", "Settings" };

    static async Task Main(string[] args)
    {
        // Hardcoded file path as this is a throw-away app anyway.
        var source = new FileSystemFileProvider("C:\\Files");

        // Hardcoded connection string as this is a throw-away app anyway.
        var target = new AzureStorageFileProvider("UseDevelopmentStorage=true");

        Console.WriteLine("Continuing will delete all files in the target. Do you want to continue? (Y/N)");
        if (Console.ReadKey(true).Key != ConsoleKey.Y)
        {
            Console.WriteLine("Exiting");
            return;
        }

        var count = 0;
        foreach (var rootContainer in RootContainers)
        {
            await target.ClearAsync(rootContainer);
            var allFileInfos = await source.GetFilesAsync(rootContainer);
            foreach (var fileInfo in allFileInfos)
            {
                var fileBytes = await source.GetFileAsync(rootContainer, fileInfo.Path);
                await target.StoreFileAsync(rootContainer, fileInfo.RelativePath, fileBytes, false);
            }
            count += allFileInfos.Count;
        }
        Console.WriteLine($"Done copying {count} files.");
    }
}
This code asks for confirmation to delete all remote files. When you confirm, it calls ClearAsync(). Then for each root container it will get the files on the source system and copy them over to the target system. Since this is mostly a throw-away app, I have hardcoded the storage location and connection string. If you want to make this more configurable, you could move these to the application's settings file.
Once we were done running this, our Azure storage account contained exactly the same set of files as our local storage did.
Inject the AzureStorageFileProvider
Next up was changing the app to make use of the files in the remote storage account. This was now super simple; all we had to do was inject a different instance of IFileProvider:
services.AddSingleton<IFileProvider>(
    new AzureStorageFileProvider(Configuration["StorageConnectionString"]));
From then on, all our code that relied on the IFileProvider abstraction used the concrete AzureStorageFileProvider which in turn used the remote storage account defined in the connection string StorageConnectionString.
In a real-world application, you could make this feature-flag driven, like so:
if (EnableRemoteStorage())
{
// Use the remote AzureStorageFileProvider version
services.AddSingleton<IFileProvider>(new AzureStorageFileProvider(Configuration["StorageConnectionString"]));
}
else
{
// Use the local FileSystemFileProvider
services.AddSingleton<IFileProvider>(new FileSystemFileProvider(Configuration["RootFolder"]));
}
That way, you can turn on the functionality at a time of your choosing, and not immediately after a release. Either will work and it all depends on your own setup and requirements.
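EnableRemoteStorage() is not shown above; a simple implementation could read a boolean from configuration. Note that the setting name EnableRemoteStorage is an assumption for this example:

```csharp
// Hypothetical helper in Startup: reads a boolean feature flag from
// configuration. GetValue<bool> returns false when the key is missing,
// so the local filesystem remains the default.
private bool EnableRemoteStorage()
{
    return Configuration.GetValue<bool>("EnableRemoteStorage");
}
```

For anything more elaborate, such as per-environment rollout or toggling flags at runtime, a library like Microsoft.FeatureManagement could take the place of this helper.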
Test
Finally, we tested the application locally. We renamed the local file folder to ensure it was no longer used. We then ran through all use cases involving file handling to ensure all files were retrieved from and stored in our remote storage account. Once we were happy with the results, we could deploy the final application and remove the local copies of our files.
Wrapping up
As mentioned in part 1, the implementation of IFileProvider and its concrete classes is a starting point and should not be seen as a full implementation. What exactly you will have to add depends on your specific requirements, but you will likely have to add support for filtering, paging, security and other features.
If you build new functionality for IFileProvider that you'd like to share, send me the code or a pull request on GitHub and I'd be happy to add it to the project.
Source code
You can find the source code for this article in this Github repository.
Links in this Document
Doc ID: 627
Full URL: https://imar.spaanjaars.com/627/improving-your-aspnet-core-sites-file-handling-capabilities-part-2-data-migration
Short cut: https://imar.spaanjaars.com/627/
Written by: Imar Spaanjaars
Date Posted: 10/31/2021 11:49
Listened to when writing: Hellrap by ghostemane