@ismcagdas ,
To follow up on my post from last month, I have resolved the issue by reviewing and refactoring my AsyncBackgroundJob code. I was still using a legacy implementation that called `.Execute`, which wrapped calling `.ExecuteAsync` in `AsyncHelper.RunSync`. Now my background jobs call `.ExecuteAsync` directly, and Hangfire handles the `await` implementation natively.
This issue has not occurred since deploying that update to production.
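As a hedged sketch of that refactor (class, args, and the exact base-class signatures are approximations, not my actual code):

```csharp
using System.Threading.Tasks;
using Abp.BackgroundJobs;
using Abp.Dependency;
using Abp.Threading;

public class MyJobArgs { }

// BEFORE (legacy): a synchronous job that bridged into async work.
// AsyncHelper.RunSync blocks a thread-pool thread while the async work runs.
public class MyJob : BackgroundJob<MyJobArgs>, ITransientDependency
{
    public override void Execute(MyJobArgs args)
    {
        AsyncHelper.RunSync(() => DoWorkAsync(args));
    }

    private async Task DoWorkAsync(MyJobArgs args) { /* ... */ }
}

// AFTER: derive from AsyncBackgroundJob<TArgs> so the job is awaited natively,
// with no sync-over-async bridge.
public class MyJobRefactored : AsyncBackgroundJob<MyJobArgs>, ITransientDependency
{
    public override async Task ExecuteAsync(MyJobArgs args)
    {
        await DoWorkAsync(args);
    }

    private async Task DoWorkAsync(MyJobArgs args) { /* ... */ }
}
```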
Cheers! -Brian
@mayankm,
I am able to successfully generate EF migration scripts. However, I am using a different task in my Azure DevOps pipeline: the .NET Core task, with a custom command.

command: custom
custom command: ef
arguments: migrations script -i -p src/$(ProjectName).EntityFrameworkCore/$(ProjectName).EntityFrameworkCore.csproj -o $(Build.ArtifactStagingDirectory)/Migrations/$(ProjectName).Core_migrations_$(Build.BuildNumber).sql
I like variables in my pipelines, so I have the ProjectName as a variable.
Once that task completes, I have two subsequent Azure DevOps tasks in my pipeline: "Copy SQL files into Sql Artifact" and "Publish Sql Artifact".

"Copy SQL files into Sql Artifact" is a Copy Files task:
source folder: sql
contents: **
target folder: $(Build.ArtifactStagingDirectory)/Migrations/

"Publish Sql Artifact" is a Publish Build Artifacts task:
path to publish: $(Build.ArtifactStagingDirectory)/Migrations/
artifact name: SQL artifact
publish location: Azure Pipelines
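In YAML form, the three tasks might look roughly like this (a hedged sketch based on the classic-editor settings above; the task versions and the `Contents` pattern are assumptions):

```yaml
steps:
  - task: DotNetCoreCLI@2
    displayName: 'Generate EF migration script'
    inputs:
      command: custom
      custom: ef
      arguments: >-
        migrations script -i
        -p src/$(ProjectName).EntityFrameworkCore/$(ProjectName).EntityFrameworkCore.csproj
        -o $(Build.ArtifactStagingDirectory)/Migrations/$(ProjectName).Core_migrations_$(Build.BuildNumber).sql

  - task: CopyFiles@2
    displayName: 'Copy SQL files into Sql Artifact'
    inputs:
      SourceFolder: sql
      Contents: '**'
      TargetFolder: $(Build.ArtifactStagingDirectory)/Migrations/

  - task: PublishBuildArtifacts@1
    displayName: 'Publish Sql Artifact'
    inputs:
      PathtoPublish: $(Build.ArtifactStagingDirectory)/Migrations/
      ArtifactName: SQL artifact
      publishLocation: Container   # "Azure Pipelines" in the classic editor
```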
Let me know if that works for you. -Brian
Hi @dschnitt,
While I'm on an older version of ANZ and not running 10.2, I looked into doing something like this a year ago for the AbpAuditLogs table, as well as for another table that I have defined.
While I never took this to production, I built a proof-of-concept that used `AmbientDataContext`.
I had to do a couple of things.
Here is my concrete repository class:
```csharp
using Abp.Auditing;
using Abp.EntityFrameworkCore;
using Abp.Runtime;
using Abp.Domain.Repositories;
using Microsoft.Extensions.Logging;
using Brian.EntityFrameworkCore.Repositories;
using Brian.EntityFrameworkCore;

namespace Brian.MultiTenancy.Auditing
{
    public class AbpAuditLogsRepository : BrianRepositoryBase<AuditLog, long>, IRepository<AuditLog, long>
    {
        private readonly ILogger<AbpAuditLogsRepository> _logger;

        public AbpAuditLogsRepository(
            IDbContextProvider<BrianDbContext> dbContextProvider,
            IAmbientDataContext ambientDataContext,
            ILogger<AbpAuditLogsRepository> logger)
            : base(dbContextProvider)
        {
            _logger = logger;
            _logger.LogDebug("[AbpAuditLogsRepository] : setting AmbientDataContext 'DBCONTEXT' to 'AuditLog'");
            ambientDataContext.SetData("DBCONTEXT", "AuditLog");
        }
    }
}
```
Here is my ConnectionStringResolver:
```csharp
using System;
using Abp.Configuration.Startup;
using Abp.Domain.Uow;
using Microsoft.Extensions.Configuration;
using Brian.Configuration;
using Abp.Reflection.Extensions;
using Abp.Zero.EntityFrameworkCore;
using Abp.MultiTenancy;
using Abp.Runtime;
using Microsoft.Extensions.Logging;

namespace Brian.EntityFrameworkCore
{
    public class BrianDbConnectionStringResolver : DbPerTenantConnectionStringResolver
    {
        private readonly IConfigurationRoot _appConfiguration;
        private readonly ILogger<BrianDbConnectionStringResolver> _logger;
        private readonly IAmbientDataContext _ambientDataContext;

        public BrianDbConnectionStringResolver(
            IAbpStartupConfiguration configuration,
            ICurrentUnitOfWorkProvider currentUnitOfWorkProvider,
            ITenantCache tenantCache,
            IAppConfigurationAccessor configurationAccessor,
            IAmbientDataContext ambientDataContext,
            ILogger<BrianDbConnectionStringResolver> logger)
            : base(configuration, currentUnitOfWorkProvider, tenantCache)
        {
            _ambientDataContext = ambientDataContext;
            _appConfiguration = configurationAccessor.Configuration;
            _logger = logger;
        }

        public override string GetNameOrConnectionString(ConnectionStringResolveArgs args)
        {
            var s = base.GetNameOrConnectionString(args);
            try
            {
                // Check whether a repository has flagged an alternate context for this scope.
                if (_ambientDataContext.GetData("DBCONTEXT") is string context)
                {
                    _logger.LogDebug($"[BrianDbConnectionStringResolver.GetNameOrConnectionString] : Found 'DBCONTEXT' of '{context}'");

                    // How do we ensure that there _is_ a connectionString defined for the given context?
                    var connectionString = _appConfiguration.GetConnectionString(context);
                    if (!string.IsNullOrEmpty(connectionString))
                    {
                        _logger.LogDebug($"[BrianDbConnectionStringResolver.GetNameOrConnectionString] : Found connectionString defined for 'DBCONTEXT' of '{context}'");
                        s = connectionString;
                    }
                }
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "[BrianDbConnectionStringResolver.GetNameOrConnectionString] : An unexpected error has occurred.");
            }
            return s;
        }
    }
}
```
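For the resolver to find an override, a connection string keyed by the context name has to exist in configuration. A hypothetical appsettings.json fragment (the server/database values are placeholders, not my actual configuration):

```json
{
  "ConnectionStrings": {
    "Default": "Server=...;Database=BrianDb;Trusted_Connection=True;",
    "AuditLog": "Server=...;Database=BrianAuditDb;Trusted_Connection=True;"
  }
}
```

With this in place, constructing `AbpAuditLogsRepository` sets the ambient `"DBCONTEXT"` key to `"AuditLog"`, and the resolver swaps in the `AuditLog` connection string for that scope.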
Then in my WebCoreModule, in the PreInitialize method:
```csharp
public override void PreInitialize()
{
    Configuration.ReplaceService<IConnectionStringResolver, BrianDbConnectionStringResolver>();
    Configuration.ReplaceService<IRepository<AuditLog, long>, AbpAuditLogsRepository>();
    ...
}
```
Disclaimer: this code was written against a much older version of ABP & ANZ, in just a few hours, as part of a rapid proof-of-concept to see if this was even feasible. It's definitely not my cleanest work, and it was never code reviewed or tested for production readiness. Additionally, I never looked into how this would be handled for data migrations within EntityFrameworkCore, so it's possible that doing this could cause issues running the .Migrator project or running your EF migrations, either code-first or via SQL scripts.
I don't know if ABP / ANZ still supports or recommends using `DbPerTenantConnectionStringResolver` or `IAmbientDataContext`.
I hope this helps. Good luck! -Brian
Hi @ismcagdas ,
Thank you for the reply. Unfortunately, it's not possible to share the source code.
The explanation of the code is that `args.RequestId` is a record in a table. There is a second table that identifies the list of documents to be zip'd up as part of this request. So the `BuildZipFileAsync` method takes the requestId, gets the list of documents to be zip'd up for this request, and then iterates over the list.
For each document, get the Stream of the document from our storage provider (Azure Blob Storage) and copy the stream into the zipfile as a new entry.
The method also compiles a "readme.txt" that lists all of the documents and some additional metadata about each file.
Ultimately that zipfile is retained as another Stream, which is sent back to our storage provider for persistent storage, and then the id of that new file is what is passed on to the notification so that the user notification can reference the zipfile.
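A hedged sketch of that flow (since I can't share the source, `IStorageProvider`, `_streamFactory`, `GetDocumentsForRequestAsync`, and the document fields are hypothetical stand-ins for my actual types):

```csharp
// Assumes: using System; using System.IO; using System.IO.Compression;
// using System.Text; using System.Threading.Tasks;
public async Task<Guid> BuildZipFileAsync(Guid requestId)
{
    var documents = await GetDocumentsForRequestAsync(requestId);
    var readme = new StringBuilder();

    // Back the zip with a temp FileStream (not a MemoryStream) to limit memory pressure.
    using (var zipStream = _streamFactory.CreateTempFileStream())
    {
        using (var zip = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var document in documents)
            {
                // Stream each document from storage directly into a new zip entry.
                var entry = zip.CreateEntry(document.FileName);
                using (var entryStream = entry.Open())
                using (var source = await _storageProvider.OpenReadAsync(document.StorageId))
                {
                    await source.CopyToAsync(entryStream);
                }
                readme.AppendLine($"{document.FileName}\t{document.SizeBytes}");
            }

            // Compile the readme.txt listing the documents and their metadata.
            var readmeEntry = zip.CreateEntry("readme.txt");
            using (var writer = new StreamWriter(readmeEntry.Open()))
            {
                await writer.WriteAsync(readme.ToString());
            }
        }

        // Persist the finished zip back to storage; its id feeds the user notification.
        zipStream.Position = 0;
        return await _storageProvider.SaveAsync(zipStream, $"request-{requestId}.zip");
    }
}
```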
The only additional code that I have in place is for handling the streams. Since these zipfiles could contain hundreds of files, I didn't want to deal with potential memory pressure issues, so I try never to hold a file as a MemoryStream. Instead, I use FileStreams in a dedicated temporary folder on that node. I have this wrapped in a `StreamFactory` class. I do associate the streams that are in use with the UnitOfWork, so when that UnitOfWork is disposed, I ensure that those FileStreams are 0'd out if I can, and that they are properly disposed of.
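As a hypothetical sketch of how temp-stream cleanup can be tied to the unit of work (my actual StreamFactory differs; `_tempFolder` and `_unitOfWorkManager` are assumed fields, and ABP's `IActiveUnitOfWork` is assumed to expose a `Disposed` event as it did in the versions I used):

```csharp
// Assumes: using System; using System.IO; using Abp.Domain.Uow;
public FileStream CreateTempFileStream()
{
    var path = Path.Combine(_tempFolder, Guid.NewGuid().ToString("N"));
    var stream = new FileStream(path, FileMode.CreateNew, FileAccess.ReadWrite,
                                FileShare.None, bufferSize: 81920, FileOptions.DeleteOnClose);

    var uow = _unitOfWorkManager.Current;
    if (uow != null)
    {
        uow.Disposed += (sender, args) =>
        {
            // Zero out and dispose the stream if the caller didn't already.
            if (stream.CanWrite)
            {
                stream.SetLength(0);
            }
            stream.Dispose();
        };
    }
    return stream;
}
```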
This StreamFactory code & strategy has been in place for years, and is used extensively throughout my application, so if this were the root cause of my issue, I would expect to be seeing similar issues in other features of my application. So I'm doubtful that this is it, but I also don't want to rule anything out.
I did some testing within my company yesterday with regards to concurrency, where we triggered ~10-15 of these requests all at once, and we did not observe any issues. So far this issue has only appeared in production. I have 2 non-production Azure-hosted environments plus I have my local laptop environment which I can expose to other users through ngrok.io, and we can't reproduce this issue anywhere else, which leads me further towards something environmental.
As for what is actually being committed in that `await uow.CompleteAsync();` statement: at the RDBMS level, I am inserting one new record into one table and updating another record in another table with the ID (FK). (Basically: add a new document, then tell the "request" which document represents the zipfile that was just generated.) So the SQL workload should be extremely lightweight.
Recognizing my older versions, I'm looking at upgrading ABP from v4.10 to v4.21, but in the collective release notes, I'm not seeing anything that would affect this. I'm also looking at upgrading Hangfire from v1.7.27 to v1.7.35, but again I don't see anything that would affect this.
I had been running ABP v4.5 for a very long time, working on a plan to upgrade to v8.x "soon".
Earlier this year, I ran into a SQL connection pool starvation issue in PROD and determined that it was caused by using `AsyncHelper.RunSync`. Increasing the minThreadPool size resolved the immediate issue, and that's when I decided to upgrade from v4.5 to v4.10, as you did great work at that time to reduce the usage of `AsyncHelper.RunSync`.
I am going to continue to look at the health of our Azure SQL database, to see if we have anything interfering that may cause the UnitOfWork to hang. I am also going to continue to look at ConnectionTimeout, CommandTimeout, and TransactionTimeout settings, as well as my Hangfire configuration. Honestly, I don't mind if the zipfile creation fails; I can implement retry logic here for resilience. What bothers me is that the transaction seems to hang indefinitely: Hangfire thinks the running job has been orphaned and queues another instance of the same job, and the hanging job then locks/blocks those other instances.
Thanks again for the reply. Let me know if anything else comes to mind.
-Brian
Hi @rickfrankel ,
I realize that I'm very late in responding to this topic, but I was curious whether your solution using the ABP `RedisOnlineClientStore` was working for you.
While I was working on the initial `AzureTablesOnlineClientStore` implementation, I had also written my own `RedisOnlineClientStore` implementation, but I ran into challenges with network I/O and concurrent request load in my Azure Redis cache, which is why I decided to move towards Azure Tables instead. Have you found any network I/O or load issues connecting to your Redis cache using the `RedisOnlineClientStore`?
I do recognize that in my `AzureTablesOnlineClientStore` implementation, I did not add a timestamp or any mechanism to clean out the table for old or stale connections.
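If I were to add that cleanup, a hypothetical sketch using the Azure.Data.Tables SDK might look like this (this is not part of my actual implementation; the table, class name, and age threshold are all assumptions, and the query relies on the service-maintained `Timestamp` rather than a custom column):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

public class OnlineClientTableCleaner
{
    private readonly TableClient _tableClient;

    public OnlineClientTableCleaner(TableClient tableClient)
    {
        _tableClient = tableClient;
    }

    // Delete entities whose service-maintained Timestamp is older than maxAge.
    public async Task CleanStaleConnectionsAsync(TimeSpan maxAge)
    {
        var cutoff = DateTimeOffset.UtcNow - maxAge;
        await foreach (var entity in _tableClient.QueryAsync<TableEntity>(e => e.Timestamp < cutoff))
        {
            await _tableClient.DeleteEntityAsync(entity.PartitionKey, entity.RowKey);
        }
    }
}
```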
If the ABP `RedisOnlineClientStore` implementation works for you, that's awesome. I'll have to look into switching to that store instead.
Cheers and Happy Coding! -Brian
Thank you very much @ismcagdas
I will give this a try and let you know how it goes. -Brian
I'm sorry @ismcagdas. I misunderstood you.
I was working with a brand new AspNet Zero project. Yes - I will try creating a brand new, empty Asp.Net Core Web project, add the Swashbuckle libraries to it, and then see if I can get their CLI to work with that example.
I will follow up and let you know how it goes. -Brian
Thanks @ismcagdas !
I wasn't aware of those capabilities. That's awesome.
Learning something new in ABP / ANZ! -Brian
Hi @shedspotter,
If you are running your application locally, or using just a single instance/service/server, you can access that Logs.txt file through the HOST Administration interface, under Administration > Maintenance (WebLogs tab).
-Brian
Hi @RenuSolutions,
SMTP settings aren't configured in a file. These are settings configured through the user interface.
Here is the ANZ documentation: https://docs.aspnetzero.com/en/common/v11.2.0/Features-Angular-Host-Settings#email
Once you have those settings configured, there should be a "Test Email" capability at the bottom of that Administration > Settings page. Once you have a valid test email sent and delivered, that should be confirmation that your environment is now configured to create a new Tenant.
Cheers, -Brian