Good morning all, and Happy Monday! I was interested to know if anyone had implemented a Rate Limiting strategy for AppServices or Controllers?
I found this library: https://github.com/stefanprodan/AspNetCoreRateLimit. Looking over the documentation, it appears to be heavily driven by appsettings.json, and comparing that to how the dynamic endpoints are generated through ANZ's AppService architecture, I wasn't sure it was a good fit. I also saw that it uses IDistributedCache, and I wasn't sure how that would work in parallel with ANZ's CacheManager.
I am mainly interested in exploring a rate-limiting implementation against the public endpoints. I know I can mitigate DDoS attacks and other abuse at the infrastructure level, but I would also like some protection at the application layer.
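For what it's worth, AspNetCoreRateLimit does not have to be driven by appsettings.json; the options can also be configured in code, which may fit dynamically generated endpoints better. A minimal sketch (the Period/Limit values are illustrative placeholders, and the exact registration calls vary slightly by library version):

```csharp
// Startup.ConfigureServices — configure AspNetCoreRateLimit entirely in code
// instead of binding IpRateLimitOptions from appsettings.json.
services.AddMemoryCache();
services.Configure<IpRateLimitOptions>(options =>
{
    options.EnableEndpointRateLimiting = true;
    options.GeneralRules = new List<RateLimitRule>
    {
        // Placeholder rule: 60 requests per minute across all endpoints.
        new RateLimitRule { Endpoint = "*", Period = "1m", Limit = 60 }
    };
});
services.AddInMemoryRateLimiting();
services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();

// Startup.Configure — register the middleware early in the pipeline.
app.UseIpRateLimiting();
```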
Thanks! -Brian
I am working with a potential partner for my platform, and they are asking about delivering an SDK. I would like to do this using my Azure CI/CD pipelines and build automation.
Since ANZ uses Swashbuckle to generate the Swagger / OpenAPI specification, I found Swashbuckle.AspNetCore.Cli: https://www.nuget.org/packages/Swashbuckle.AspNetCore.Cli / https://github.com/domaindrivendev/Swashbuckle.AspNetCore
Following the instructions I have read in other articles, I have done the following:
dotnet new tool-manifest
dotnet tool install --version 6.3.1 Swashbuckle.AspNetCore.Cli --ignore-failed-sources
dotnet swagger tofile --output api.json bin/Debug/net6.0/MyProject.Web.Host.dll v1
Unfortunately, I receive the following error:
Unhandled exception. Castle.MicroKernel.ComponentNotFoundException: No component for supporting the service Abp.AspNetCore.Configuration.AbpAspNetCoreConfiguration was found
at Castle.MicroKernel.DefaultKernel.Castle.MicroKernel.IKernelInternal.Resolve(Type service, Arguments arguments, IReleasePolicy policy, Boolean ignoreParentContext)
at Castle.MicroKernel.DefaultKernel.Resolve(Type service, Arguments arguments)
at Castle.Windsor.WindsorContainer.Resolve[T]()
at Abp.Dependency.IocManager.Resolve[T]()
at Abp.AspNetCore.Mvc.Providers.AbpAppServiceControllerFeatureProvider.IsController(TypeInfo typeInfo)
at Microsoft.AspNetCore.Mvc.Controllers.ControllerFeatureProvider.PopulateFeature(IEnumerable`1 parts, ControllerFeature feature)
at Microsoft.AspNetCore.Mvc.ApplicationParts.ApplicationPartManager.PopulateFeature[TFeature](TFeature feature)
at Microsoft.AspNetCore.Mvc.ApplicationModels.ControllerActionDescriptorProvider.GetControllerTypes()
at Microsoft.AspNetCore.Mvc.ApplicationModels.ControllerActionDescriptorProvider.GetDescriptors()
at Microsoft.AspNetCore.Mvc.ApplicationModels.ControllerActionDescriptorProvider.OnProvidersExecuting(ActionDescriptorProviderContext context)
at Microsoft.AspNetCore.Mvc.Infrastructure.DefaultActionDescriptorCollectionProvider.UpdateCollection()
at Microsoft.AspNetCore.Mvc.Infrastructure.DefaultActionDescriptorCollectionProvider.Initialize()
at Microsoft.AspNetCore.Mvc.Infrastructure.DefaultActionDescriptorCollectionProvider.get_ActionDescriptors()
at Microsoft.AspNetCore.Mvc.ApiExplorer.ApiDescriptionGroupCollectionProvider.get_ApiDescriptionGroups()
at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GetSwagger(String documentName, String host, String basePath)
at Swashbuckle.AspNetCore.Cli.Program.<>c.<Main>b__0_4(IDictionary`2 namedArgs) in C:\projects\ahoy\src\Swashbuckle.AspNetCore.Cli\Program.cs:line 82
at Swashbuckle.AspNetCore.Cli.CommandRunner.Run(IEnumerable`1 args) in C:\projects\ahoy\src\Swashbuckle.AspNetCore.Cli\CommandRunner.cs:line 68
at Swashbuckle.AspNetCore.Cli.CommandRunner.Run(IEnumerable`1 args) in C:\projects\ahoy\src\Swashbuckle.AspNetCore.Cli\CommandRunner.cs:line 59
at Swashbuckle.AspNetCore.Cli.Program.Main(String[] args) in C:\projects\ahoy\src\Swashbuckle.AspNetCore.Cli\Program.cs:line 121
I have pulled the source code for version 6.3.1 of Swashbuckle.AspNetCore.Cli, and Program.cs does some interesting things with CommandRunner and SubCommands, which I'm not very familiar with.
Ultimately, it looks like it's trying to run the dotnet exec process on the assembly that contains the Program & Startup for the web app.
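Reading the stack trace, the CLI appears to load the host's Startup and let MVC enumerate controllers, which triggers ABP's Castle Windsor resolution outside of a bootstrapped ABP application. One hypothetical workaround (the SWAGGER_GEN variable and SwaggerGenStartup class are my own invented names, not an ABP or Swashbuckle convention) is to give the CLI a minimal startup path:

```csharp
// Program.cs sketch: route the Swashbuckle CLI to a minimal Startup so that
// loading the assembly does not trigger the full ABP/Castle Windsor bootstrap.
// "SWAGGER_GEN" and SwaggerGenStartup are hypothetical names; SwaggerGenStartup
// would register only AddControllers() + AddSwaggerGen(), skipping AddAbp()
// and the module initialization that needs a live container.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            var swaggerGen = Environment.GetEnvironmentVariable("SWAGGER_GEN") == "true";
            webBuilder.UseStartup(swaggerGen ? typeof(SwaggerGenStartup) : typeof(Startup));
        });
```

You would then set SWAGGER_GEN=true in the pipeline step that runs `dotnet swagger tofile`.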
Has anyone tried using this Swashbuckle Cli before? Is it possible to get this to work, or am I running into a wall?
Thanks! -Brian
Thanks @ismcagdas!
I'll let you know how it goes. -Brian
Hello everyone! I realize I've been silent for a little while. Life has kept me pretty busy this past month. I know I owe a few people some outstanding action items, and I will work to get those done next week.
In the meantime, I have a question. In working with a partner / reseller on my product, their security team did an audit of my database schema, and per their security requirements, they need the AbpUsers.Username field to be stored encrypted.
I'm comfortable doing the work to support that, but I was curious whether: a.) this is already implemented in a later ABP Framework version, or b.) anyone else has already considered this and, if so, how they approached it. I see that UserManager and LoginManager inherit from the AbpUserManager and AbpLogInManager classes, where the methods are defined as virtual, so I should be able to simply override the methods I need, add my encryption / decryption where needed, and move on.
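As a sanity check on that override approach, here is a hypothetical sketch. The exact virtual method and signature should be verified against your ABP version, and the constructor (AbpUserManager takes many dependencies) is omitted for brevity:

```csharp
// Hypothetical sketch: encrypt UserName on the way into the store by
// overriding a virtual member of the ABP-derived UserManager.
// Verify the actual virtual method/signature in your ABP version;
// the constructor and its dependencies are omitted here.
public class EncryptingUserManager : UserManager
{
    public override async Task<IdentityResult> CreateAsync(User user)
    {
        // SimpleStringCipher ships with ABP (Abp.Runtime.Security).
        user.UserName = SimpleStringCipher.Instance.Encrypt(user.UserName);
        return await base.CreateAsync(user);
    }
}
```

Note that encrypting AbpUsers.UserName also affects lookups (login, uniqueness checks), so the corresponding read paths in AbpLogInManager would need matching decryption or encrypted-value comparison.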
Before I started down this path I wanted to check and ask.
Thoughts?
Thanks! -Brian
Hi @pliaspzero ,
Just curious: are you deploying to Azure? And if so, are you using Azure Redis, or are you managing your own Redis cluster?
-Brian
@Jason,
Here is my AzureTablesOnlineClientStore class. Please note that this was originally written against ABP v4.5.0 and ANZ v6.9.0. I have not tried running this against ANZ v11.1.0.
using Abp.RealTime;
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
using Azure.Data.Tables;
using BrianPieslak.ANZ.Configuration;
using Azure;
using Microsoft.Extensions.Configuration;
using Abp.Runtime.Security;

namespace BrianPieslak.ANZ.Notifications
{
    public class AzureTablesOnlineClientStore : IOnlineClientStore
    {
        private readonly IAppConfigurationAccessor _configurationAccessor;

        public AzureTablesOnlineClientStore(IAppConfigurationAccessor configurationAccessor)
        {
            _configurationAccessor = configurationAccessor;
        }

        public void Add(IOnlineClient client)
        {
            PerformOperation<string>((tableClient) =>
            {
                var partitionKey = client.TenantId.HasValue ? client.TenantId.Value.ToString().PadLeft(10, '0') : "HOST";
                var entity = new OnlineClientEntity()
                {
                    PartitionKey = partitionKey,
                    RowKey = client.ConnectionId,
                    Data = SerializeObject(client)
                };
                tableClient.AddEntity(entity);
                return client.ConnectionId;
            });
        }

        public bool Remove(string connectionId)
        {
            return TryRemove(connectionId, out _);
        }

        public bool TryRemove(string connectionId, out IOnlineClient client)
        {
            client = PerformOperation<IOnlineClient>((tableClient) =>
            {
                var queryResults = tableClient.Query<OnlineClientEntity>(ent => ent.RowKey == connectionId);
                OnlineClientEntity entity = null;
                IOnlineClient result = null;
                if (queryResults != null && queryResults.Count() == 1)
                {
                    foreach (Page<OnlineClientEntity> page in queryResults.AsPages())
                    {
                        foreach (OnlineClientEntity qEntity in page.Values)
                        {
                            try
                            {
                                result = DeserializeObject(qEntity.Data);
                                entity = qEntity;
                            }
                            catch (Exception)
                            {
                                // Unable to decrypt the record, so remove it.
                                tableClient.DeleteEntity(qEntity.PartitionKey, qEntity.RowKey);
                            }
                        }
                    }
                }
                if (entity != null)
                {
                    tableClient.DeleteEntity(entity.PartitionKey, entity.RowKey);
                }
                return result;
            });
            return client != null;
        }

        public bool TryGet(string connectionId, out IOnlineClient client)
        {
            client = PerformOperation<IOnlineClient>((tableClient) =>
            {
                var queryResults = tableClient.Query<OnlineClientEntity>(ent => ent.RowKey == connectionId);
                IOnlineClient result = null;
                if (queryResults != null && queryResults.Count() == 1)
                {
                    foreach (Page<OnlineClientEntity> page in queryResults.AsPages())
                    {
                        foreach (OnlineClientEntity qEntity in page.Values)
                        {
                            try
                            {
                                result = DeserializeObject(qEntity.Data);
                            }
                            catch (Exception)
                            {
                                // Unable to decrypt the record, so remove it.
                                tableClient.DeleteEntity(qEntity.PartitionKey, qEntity.RowKey);
                            }
                        }
                    }
                }
                return result;
            });
            return client != null;
        }

        public bool Contains(string connectionId)
        {
            var id = PerformOperation<string>((tableClient) =>
            {
                var results = tableClient.Query<OnlineClientEntity>(ent => ent.RowKey == connectionId);
                if (results != null && results.Count() == 1)
                    return connectionId;
                return null;
            });
            return !String.IsNullOrEmpty(id);
        }

        public IReadOnlyList<IOnlineClient> GetAll()
        {
            return PerformOperation<IReadOnlyList<IOnlineClient>>((tableClient) =>
            {
                var queryResults = tableClient.Query<OnlineClientEntity>();
                var result = new List<IOnlineClient>();
                if (queryResults != null)
                {
                    foreach (Page<OnlineClientEntity> page in queryResults.AsPages())
                    {
                        foreach (OnlineClientEntity qEntity in page.Values)
                        {
                            try
                            {
                                result.Add(DeserializeObject(qEntity.Data));
                            }
                            catch (Exception)
                            {
                                // Unable to decrypt the record, so remove it.
                                tableClient.DeleteEntity(qEntity.PartitionKey, qEntity.RowKey);
                            }
                        }
                    }
                }
                return result.ToImmutableList();
            });
        }

        private T PerformOperation<T>(Func<TableClient, Object> function) where T : class
        {
            var tableClient = GetTableClient();
            return function(tableClient) as T;
        }

        private TableServiceClient GetTableServiceClient()
        {
            var configuration = _configurationAccessor.Configuration;
            string connectionStringName = configuration["App:SignalR:OnlineClientStore:Azure:ConnectionString"];
            string connectionString = configuration.GetConnectionString(connectionStringName);
            if (String.IsNullOrEmpty(connectionString))
            {
                connectionString = connectionStringName;
            }
            return new TableServiceClient(connectionString);
        }

        private TableClient GetTableClient()
        {
            return GetTableClient(GetTableServiceClient());
        }

        private TableClient GetTableClient(TableServiceClient serviceClient)
        {
            var configuration = _configurationAccessor.Configuration;
            string tableName = configuration["App:SignalR:OnlineClientStore:Azure:TableName"];
            serviceClient.CreateTableIfNotExists(tableName);
            return serviceClient.GetTableClient(tableName);
        }

        private class OnlineClientEntity : ITableEntity
        {
            public string Data { get; set; }
            public string PartitionKey { get; set; }
            public string RowKey { get; set; }
            public DateTimeOffset? Timestamp { get; set; }
            public Azure.ETag ETag { get; set; }
        }

        private string SerializeObject(IOnlineClient client)
        {
            var configuration = _configurationAccessor.Configuration;
            var data = Newtonsoft.Json.JsonConvert.SerializeObject(client);
            if (bool.TryParse(configuration["App:SignalR:OnlineClientStore:StoreEncrypted"], out bool storeEncrypted) && storeEncrypted)
            {
                data = SimpleStringCipher.Instance.Encrypt(data);
            }
            return data;
        }

        private Abp.RealTime.OnlineClient DeserializeObject(string data)
        {
            var configuration = _configurationAccessor.Configuration;
            if (bool.TryParse(configuration["App:SignalR:OnlineClientStore:StoreEncrypted"], out bool storeEncrypted) && storeEncrypted)
            {
                data = SimpleStringCipher.Instance.Decrypt(data);
            }
            return Newtonsoft.Json.JsonConvert.DeserializeObject<Abp.RealTime.OnlineClient>(data);
        }
    }
}
This class is defined in my ".Core" project.
Then, to use this class, in the Module of your .Web.Core project (mine is BrianPieslakWebCoreModule), I have the following in the PreInitialize method:
// Online Client Cache
Type replacementOnlineCacheStore = default;
if (bool.TryParse(_appConfiguration["App:SignalR:OnlineClientStore:Azure:Enabled"], out bool signalRAzureEnabled) && signalRAzureEnabled)
{
    replacementOnlineCacheStore = typeof(AzureTablesOnlineClientStore);
}

if (replacementOnlineCacheStore != default)
{
    if (IocManager.IsRegistered<IOnlineClientStore>())
    {
        Configuration.ReplaceService(typeof(IOnlineClientStore), replacementOnlineCacheStore, Abp.Dependency.DependencyLifeStyle.Singleton);
    }
    else
    {
        // Requires: using Castle.MicroKernel.Registration;
        IocManager.IocContainer.Register(Component.For(typeof(IOnlineClientStore)).ImplementedBy(replacementOnlineCacheStore).LifestyleSingleton());
    }
}
This code is slightly more complicated than it needs to be because I support other implementations of IOnlineClientStore, such as SQL and Redis. It could be more simply implemented as an extension method. I guess I just got a little lazy =D
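For reference, a hypothetical sketch of that extension-method approach (the method name and placement are my own assumptions, not ANZ conventions):

```csharp
// Hypothetical extension method wrapping the replace-or-register logic above,
// so a module's PreInitialize can swap any service in one line.
using System;
using Abp.Configuration.Startup;
using Abp.Dependency;
using Castle.MicroKernel.Registration;

public static class IocManagerExtensions
{
    public static void ReplaceOrRegisterSingleton<TService>(
        this IIocManager iocManager,
        IAbpStartupConfiguration configuration,
        Type implementation)
    {
        if (iocManager.IsRegistered<TService>())
        {
            // Service already registered by the framework: replace it.
            configuration.ReplaceService(typeof(TService), implementation, DependencyLifeStyle.Singleton);
        }
        else
        {
            // Not registered yet: register the implementation directly.
            iocManager.IocContainer.Register(
                Component.For(typeof(TService)).ImplementedBy(implementation).LifestyleSingleton());
        }
    }
}

// Usage from PreInitialize (sketch):
// IocManager.ReplaceOrRegisterSingleton<IOnlineClientStore>(Configuration, typeof(AzureTablesOnlineClientStore));
```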
Then lastly, to drive my configuration via settings, you'd need to add this section to your appsettings.json (and then override as appropriate in your appsettings.&lt;environment&gt;.json):
"App": {
"SignalR": {
"Azure": {
"Enabled": true,
"ConnectionString": "AzureSignalR"
},
"OnlineClientStore": {
"StoreEncrypted": true,
"Azure": {
"Enabled": true,
"ConnectionString": "AzureStorage",
"TableName": "OnlineClientCache"
}
}
}
}
The first object, "App:SignalR:Azure", drives whether I'm using the Azure SignalR service, and its ConnectionString attribute references a named connection string in the "ConnectionStrings" section of the configuration file.
The "App:SignalR:OnlineClientStore" object drives how my PreInitialize code wires up a replacement IOnlineClientStore service, if configured.
I hope this helps.
@ismcagdas - feel free to use any/all of this in the ANZ / ABP framework if you want, or I can submit a ticket in github and contribute my code there.
Cheers! -Brian
Hi @dexter.cunanan,
What technology are you using for your host service? Are you deploying using Docker, by chance?
I have seen cases where DataProtection defaults to the local file system, but still fails to work properly on Docker container instances, even if you are running just a single instance.
https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-6.0
https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/default-settings?view=aspnetcore-6.0
When hosting in a Docker container, keys should be persisted in a folder that's a Docker volume (a shared volume or a host-mounted volume that persists beyond the container's lifetime) or in an external provider, such as Azure Key Vault or Redis. An external provider is also useful in web farm scenarios if apps can't access a shared network volume (see PersistKeysToFileSystem for more information).
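Persisting the keys outside the container is a small change; a minimal sketch (the path and application name below are placeholders, and the path should map to a Docker volume):

```csharp
// Startup.ConfigureServices — persist DataProtection keys to a mounted volume
// so they survive container restarts and are shared across instances.
// "/var/dpkeys" and "MyApp" are illustrative placeholders.
using System.IO;
using Microsoft.AspNetCore.DataProtection;

services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo("/var/dpkeys"))
    .SetApplicationName("MyApp");
```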
@ismcagdas - does the ABP / ANZ framework call .AddDataProtection anywhere? I can see the DataProtection assemblies referenced, but I can't find anything in the ABP source code on github that calls .AddDataProtection.
-Brian
I am familiar with this problem. It is not specific to the Azure SignalR service, but rather to how the internal ANZ OnlineClientStore works.
When a client registers with your SignalR hub, even when using the Azure SignalR service, the ANZ framework stores the ConnectionId for that connected client in an OnlineClientStore class. By default, this class uses an in-memory Dictionary, which is not shared across your instances.
So even though, technically, the client is connected through the Azure SignalR service, messages are sent from server to client, so the server needs to know who gets which message when a new message is published (a typical pub / sub model). When the ANZ notification publication classes try to publish a new message to a user via Notifications, the OnlineClientStore on instance B isn't shared with the OnlineClientStore on instance A.
To solve this, you need to replace the IOnlineClientStore service with something that supports a distributed store. I have developed code that supports 3 possible distributed stores:
1. SQL. Pro: it's transactional, and it's super easy to implement. Con: it's in your db, so it's the slowest possible implementation, plus it's potentially transient data being stored in your db.
2. Redis. Pro: also super simple to implement. Con: you could encounter timeouts due to high network I/O, and it doesn't really fit with the ANZ PerRequestRedisCache design. Another con is that even if you use Redis keyspace events to keep a local in-memory copy of the cache sync'd across multiple instances, you could still encounter timing issues. So IMHO this is the least reliable.
3. Azure Storage Account (Tables). Pro: also transactional and super easy to implement; no timing issues like Redis, and no transient data bloat in your db. Con: it's one more third-party service that you have to manage (connection strings, security, ...).
Are you familiar with replacing ANZ core classes on Module initialization? If not, I can try to share some of my code with you.
@ismcagdas - do you think this is something that the ABP / ANZ framework could benefit from? I'm happy to contribute.
I hope this helps! -Brian
Here is a scrubbed version of my DEV CI pipeline. I have a separate release pipeline that handles the CD aspect of my infrastructure.
As I noted before, I use a lot of variables.
# Variable 'BROWSERSLIST_IGNORE_OLD_DATA' was defined in the Variables tab
# Variable 'BuildAngularCliVersion' was defined in the Variables tab
# Variable 'BuildConfiguration' was defined in the Variables tab
# Variable 'BuildDotNetCoreVersion' was defined in the Variables tab
# Variable 'BuildEntityFrameworkVersion' was defined in the Variables tab
# Variable 'BuildNodeVersion' was defined in the Variables tab
# Variable 'BuildNugetVersion' was defined in the Variables tab
# Variable 'BuildUseGulp' was defined in the Variables tab
# Variable 'BuildUseNuGet' was defined in the Variables tab
# Variable 'ProjectName' was defined in the Variables tab
# Variable 'yarnSetVersionEnabled' was defined in the Variables tab
# Variable 'yarnVersion' was defined in the Variables tab
name: $(date:yyyyMMdd)$(rev:.r)
resources:
  repositories:
  - repository: self
    type: git
    ref: refs/heads/develop
jobs:
- job: Phase_1
  displayName: Build and Package
  cancelTimeoutInMinutes: 1
  pool:
    vmImage: ubuntu-20.04
  steps:
  - checkout: self
  - task: YarnInstaller@3
    displayName: Install Yarn Version
    condition: and(succeeded(), eq(variables.yarnSetVersionEnabled, true))
    inputs:
      versionSpec: $(yarnVersion)
  - task: CmdLine@2
    displayName: Check Yarn Version
    condition: and(succeeded(), eq(variables.yarnSetVersionEnabled, true))
    inputs:
      script: yarn --version
  - task: UseDotNet@2
    displayName: Use .NET Core
    inputs:
      version: $(BuildDotNetCoreVersion)
  - task: NuGetToolInstaller@1
    displayName: Use NuGet
    condition: and(succeeded(), eq(variables.BuildUseNuGet, true))
    inputs:
      versionSpec: $(BuildNugetVersion)
  - task: NuGetCommand@2
    displayName: NuGet restore
    inputs:
      solution: $(ProjectName).Web.sln
  - task: NodeTool@0
    displayName: Use Node
    inputs:
      versionSpec: $(BuildNodeVersion)
  - task: DotNetCoreCLI@2
    displayName: Initialize EntityFrameworkCore
    inputs:
      command: custom
      custom: tool
      arguments: install dotnet-ef --global --version $(BuildEntityFrameworkVersion) --ignore-failed-sources
  - task: DotNetCoreCLI@2
    displayName: Build
    inputs:
      projects: $(ProjectName).Web.sln
      arguments: --configuration $(BuildConfiguration) --no-restore
  - task: Npm@1
    displayName: Install Angular CLI
    inputs:
      command: custom
      workingDir: src/$(ProjectName).Web.Host
      verbose: false
      customCommand: install -g @angular/cli@$(BuildAngularCliVersion)
  - task: Yarn@3
    displayName: Yarn Install
    inputs:
      projectDirectory: src/$(ProjectName).Web.Host
      arguments: install --verbose
  - task: Yarn@3
    displayName: Yarn Add Gulp
    condition: and(succeeded(), eq(variables.BuildUseGulp, true))
    inputs:
      projectDirectory: src/$(ProjectName).Web.Host
      arguments: add gulp --dev
  - task: CmdLine@2
    displayName: Check Node & Angular/CLI versions
    inputs:
      script: node ./node_modules/@angular/cli/bin/ng --version
      workingDirectory: src/$(ProjectName).Web.Host
      failOnStderr: true
  - task: CmdLine@2
    displayName: Check Gulp version
    condition: and(succeeded(), eq(variables.BuildUseGulp, true))
    inputs:
      script: node ./node_modules/gulp/bin/gulp --version
      workingDirectory: src/$(ProjectName).Web.Host
      failOnStderr: true
  - task: CmdLine@2
    displayName: Build Gulp
    condition: and(succeeded(), eq(variables.BuildUseGulp, true))
    inputs:
      script: node ./node_modules/gulp/bin/gulp build
      workingDirectory: src/$(ProjectName).Web.Host
      failOnStderr: true
  - task: CmdLine@2
    displayName: Build Angular
    inputs:
      script: node --max-old-space-size=8192 ./node_modules/@angular/cli/bin/ng build --progress=false --configuration=$(BuildConfiguration) --output-path=$(Build.ArtifactStagingDirectory)/temp/wwwroot --source-map=false
      workingDirectory: src/$(ProjectName).Web.Host
  - task: DotNetCoreCLI@2
    displayName: Publish Website
    inputs:
      command: publish
      publishWebProjects: false
      projects: src/$(ProjectName).Web.Host/$(ProjectName).Web.Host.csproj
      arguments: -c $(BuildConfiguration) -o $(Build.ArtifactStagingDirectory)/temp /p:PublishProfile=$(BuildConfiguration) --no-restore
      zipAfterPublish: false
      modifyOutputPath: false
  - task: ArchiveFiles@2
    displayName: Zip API
    inputs:
      rootFolderOrFile: $(Build.ArtifactStagingDirectory)/temp
      includeRootFolder: false
      sevenZipCompression: 5
      archiveFile: $(Build.ArtifactStagingDirectory)/$(ProjectName)_$(BuildConfiguration)_$(Build.BuildNumber).zip
  - task: DeleteFiles@1
    displayName: Delete Temp Folder
    inputs:
      SourceFolder: $(Build.ArtifactStagingDirectory)
      Contents: temp
  - task: PublishBuildArtifacts@1
    displayName: Publish Web Artifacts
    inputs:
      ArtifactName: Web
  - task: DotNetCoreCLI@2
    displayName: Create SQL Migration Scripts
    inputs:
      command: custom
      custom: ef
      arguments: migrations script -i -p src/$(ProjectName).EntityFrameworkCore/$(ProjectName).EntityFrameworkCore.csproj -o $(Build.ArtifactStagingDirectory)/Migrations/$(ProjectName).Core_$(BuildConfiguration)_migrations_$(Build.BuildNumber).sql
  - task: CopyFiles@2
    displayName: Copy SQL files into Sql Artifact
    inputs:
      SourceFolder: sql
      TargetFolder: $(Build.ArtifactStagingDirectory)/Migrations/
  - task: PublishBuildArtifacts@1
    displayName: Publish Sql Artifact
    inputs:
      PathtoPublish: $(Build.ArtifactStagingDirectory)/Migrations/
      ArtifactName: SQL
...
The only thing that might not work out of the box for you is the NuGet restore command, if you happen to use a private Artifact Feed instead of all public NuGet repositories. It's fairly simple to wire those up as well.
This framework has worked beautifully for me for years. I have extended it to build Docker images and to tag and push those images to an Azure ACR. I have also added third-party code scanning services, such as WhiteSource and Snyk.
I can share more of those additions later if you would like, but those start to get into AZDO service connections and 3rd party service accounts, so it made sense to keep those out for now.
I also do not build or publish the .Public or .Migrator projects. I could; I just decided not to use them for this project. Instead, we generate the .sql files using the ef migrations script command.
Lastly, I've been working on upgrading from v6.9.0 to a more current version for a while now. Sadly, I'm still on .NET Core 2.2. That is why you see so many "Version" parameters, and even the execution of the "gulp" command, controlled by conditional variables.
This same pipeline should work for you for both your dev branch and your master branch. I haven't worked with one .yml file as a template for multiple pipelines, so I basically got this .yml file the way I wanted it, then uploaded it twice, once to my DEV pipeline and once to my PROD pipeline, and configured the variables and source repo for each.
I hope this helps! Let me know if you have any questions.
-Brian
@4Matrix - I'm wrapping up another feature request this morning so I should have time to get you a copy of my Azure pipeline shortly.
To note: I use a lot of variables in my pipelines so that they can be used across my DEV, UAT, and PROD environments. Are you comfortable working with Azure Pipelines and variables?
I should have something posted for you tomorrow. -Brian