ASP.NET Core MVC 2

Setting up your app

 .csproj

  • Used by MSBuild
  • Used by NuGet
  • Add packages and tools packages

Program.cs

  • Called first thing when the app starts. Main() called by runtime.
  • Please note that all ASP.NET Core apps are now console apps and you define what you want to make of them.
  • Sets up the hosting environment into which the web app will be loaded. Methods called in order to set up the environment:
    • .UseKestrel
    • .UseContentRoot – root folder for the application’s content (static web files live under wwwroot inside it)
    • .ConfigureAppConfiguration – prepares the configuration for your app (config files, environment variables, command-line args).
    • .AddUserSecrets – store sensitive data outside code files
    • .ConfigureLogging – define where you want to log your runtime data (e.g. based on entries in an appsettings.json section; you can also log to console output) and whether you want to log debug data.
    • .UseIISIntegration – enables integration with IIS and IIS Express
    • .UseDefaultServiceProvider – configures dependency injection
    • .UseStartup – allows you to specify a class which defines middleware and services.
  • The template provides a call to WebHost.CreateDefaultBuilder(), which performs the most typical setup. This approach hides a lot of internal detail, so we break it down below.
  • Minimal working setup:
    public static IWebHost BuildWebHost(string[] args) {
      return new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();
    }
  • Complete setup:
    public static IWebHost BuildWebHost(string[] args) {
      return new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((hostingContext, config) => {
          config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);
          config.AddEnvironmentVariables();  // Load data from env. variables
          if (args != null) {
            config.AddCommandLine(args);
          }
        })
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();    
    }

Startup.cs

  • Called right after the methods in Program.cs
  • Called by runtime.
  • Configure services to be used by the app
    public void ConfigureServices(IServiceCollection services);
    • This covers a lot of different things: defining dependency injection bindings, setting up session state, identity (authentication), MVC, and so on. Each service is configured using an options argument, which is usually extended through extension methods for specific components.
    • Services can also be used by middleware. That seems to be the reason why services are set up before the middleware. For example, we set up MVC services by calling .AddMvc() – later, in the Configure() we assign MVC to the HTTP pipeline by calling .UseMvc().
  • Configure HTTP request pipeline  – middleware (pp. 391)
public void Configure(IApplicationBuilder app, IHostingEnvironment env);
    • This is done by setting up various middleware components that interact with the HTTP request/response: developer error pages, status code pages, serving static content, using session, using authentication, using MVC and defining routes.
    • Types of middleware:
      • Content generating middleware – adds content to the response. MVC itself belongs to this category – it is added to the pipeline via .UseMvc().
      • Short-circuiting middleware – intercepts requests before they reach the content generating middleware components and determines whether the request should be passed through (often for performance reasons).
      • Request-editing middleware: makes changes to the HTTP request for downstream components to process.
      • Response-editing middleware: makes changes to the HTTP response after all the other components are done.
    • Create a class of your own and add it to HTTP pipeline:
      app.UseMiddleware<YourMiddlewareClassHere>();
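
    A minimal sketch of a short-circuiting middleware class, assuming a hypothetical MaintenanceModeMiddleware and query-string trigger (neither is from the book):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    public class MaintenanceModeMiddleware {
        private readonly RequestDelegate _next;

        public MaintenanceModeMiddleware(RequestDelegate next) {
            _next = next;
        }

        public async Task Invoke(HttpContext context) {
            if (context.Request.Query.ContainsKey("maintenance")) {
                // Short-circuit: generate the response ourselves, so the
                // request never reaches the content generating components.
                context.Response.StatusCode = 503;
                await context.Response.WriteAsync("Down for maintenance.");
            } else {
                await _next(context); // Pass the request downstream.
            }
        }
    }

    Register it in Configure() before the content generating middleware: app.UseMiddleware<MaintenanceModeMiddleware>();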

Startup.cs – some examples

public void ConfigureServices(IServiceCollection services) {
  // "using Microsoft.EntityFrameworkCore" needed for this one.
  services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(
          _configuration["Data:SportsStoreProducts:ConnectionString"]));
  services.AddTransient<IProductRepository, EFProductRepository>();
  services.AddMemoryCache(); // Needed for using Session.
  services.AddSession();     // Needed for using Session.
  services.AddMvc();
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env) {
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();    // Additional info in development.
        app.UseStatusCodePages();           // Additional info with HTTP errors.
        app.UseBrowserLink();               // Dev only
    }    
    app.UseStaticFiles();      // Serve static files from /wwwroot.
    app.UseSession();          // Needed for using Session.
    app.UseExceptionHandler("/Error"); // Production error handler ("/Error" is an illustrative path; typically goes in an else branch).
    app.UseMvc(routes => {     // Define routes.
        routes.MapRoute(name: "default", template: "{controller=Product}/{action=List}/{id?}");
    });
}

Environment variables

  • ASPNETCORE_ENVIRONMENT: {Development, Staging, Production}
  • cmd.exe commands
    • Please note that if you set the variable via setx in cmd.exe, you must restart cmd.exe before the new value is visible
    • A nice article
setx ASPNETCORE_ENVIRONMENT  "Development"    // Set value
echo %ASPNETCORE_ENVIRONMENT%                 // Get value

Routes

  • defined in Startup.Configure():
    app.UseMvc(routes => {
        routes.MapRoute(
            name: null,
            template: "{category}/Page{productPage:int}",
            defaults: new { controller = "Product", action = "List" }
        );
        routes.MapRoute(
            name: null,
            template: "Page{productPage:int}",
            defaults: new { controller = "Product", action = "List" }
        );
        routes.MapRoute(
            name: null,
            template: "{category}",
            defaults: new { controller = "Product", action = "List" }
        );
        routes.MapRoute(
            name: "",
            template: "{controller=Product}/{action=List}"
        );
    });

Entity Framework Core

Setup

  1. Create a SomethingDbContext class. Inherit from DbContext and expose your tables as DbSet<T> properties.
  2. Create a repository interface and a repository implementation. Keep the SomethingDbContext member private. Return results as properties.
  3. Add EF Core command-line tools to your project. Copy-paste the following line to .csproj.
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0"/>
  4. Define a connection string in appsettings.json
{
    "Data": {
        "SportsStoreProducts": {
            "ConnectionString": "Server=(localdb)\\MSSQLLocalDb;Database=SportsStore;Trusted_Connection=True;MultipleActiveResultSets=true"
        }
    }
}
  5. Map services provided by EF Core to your SomethingDbContext. Do this in Startup.cs.
    public IConfiguration Configuration { get; set; } // Access appsettings.json
        public void ConfigureServices(IServiceCollection services) {
            // Setting up Entity Framework Core.
            // Maps services provided by the EF Core to our ApplicationDbContext class.
            // UseSqlServer is an extension method that comes from Microsoft.EntityFrameworkCore namespace and is specific to EF Core.
            services.AddDbContext<ApplicationDbContext>(
                options => options.UseSqlServer(
                    Configuration["Data:SportsStoreProducts:ConnectionString"]));
            ...
  • Turn off ValidateScopes in Program.BuildWebHost() (see the one-liner below).
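
    A one-line sketch of what that looks like on the WebHostBuilder chain (UseDefaultServiceProvider was listed among the setup methods above):

    .UseDefaultServiceProvider(options => options.ValidateScopes = false)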

Reference one project from another project

  • .csproj file
    <ItemGroup>
        <ProjectReference Include="..\SportsStore\SportsStore.csproj" />
    </ItemGroup>
  • If VSCode does not recognize any types from the newly referenced project, try rebuilding the test project from the terminal:
    dotnet test .\SportsStore.Tests\

Tag Helpers – built in

  • A quick list of interesting tag helpers. These are used as attributes on regular HTML elements:
    /* Model binding */
    asp-for="SomeModelPropertyName"
    
    /* Model validation errors. Both -for and -summary apply 
     * a CSS class input-validation-error to erroneous input elements. */
    asp-validation-summary="All"  // Use with div
    asp-validation-for="Name"     // Use with span or div
    asp-validation-for="Description"
    
    /* Resource referencing (.js, .css). The paths you provide must be under /wwwroot */
    asp-href-include=""    // More on this below.
    asp-href-exclude=""
    
    /* Routing - you can use this for links, form action...*/
      asp-action="ControllerAction"
      asp-controller="Controller"
      asp-route-<action_parameter_name>="<parameter_value>"
          // E.g. asp-route-category="@category"
          //      asp-route-productPage="1"
  • some tag helpers (like asp-href-include and asp-href-exclude) use glob file pattern matching
    • The characters ** match any depth and number of directories until you reach what you want.
asp-href-include="/lib/bootstrap/dist/**/*.min.css"
asp-href-exclude="**/*-grid-*, **/*-reboot*"

Tag Helpers – roll your own

  • pass in routing values via a dictionary. Every HTML attribute with the prefix “page-url-” will be transferred into the dictionary:
[HtmlAttributeName(DictionaryAttributePrefix = "page-url-")]
public Dictionary<string, object> PageUrlValues { get; set; } = new Dictionary<string, object>();

and then use this with IUrlHelper.Action():

helper.Action(PageAction, PageUrlValues)
  • Framework calls your tag helper’s Process() method:
public override void Process(TagHelperContext context, TagHelperOutput output);
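
Putting those pieces together – a hedged sketch of a pager tag helper, loosely following the SportsStore example (the class name, target attributes and the productPage route value are illustrative):

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.Routing;
using Microsoft.AspNetCore.Mvc.ViewFeatures;
using Microsoft.AspNetCore.Razor.TagHelpers;

[HtmlTargetElement("div", Attributes = "page-action")]
public class PageLinkTagHelper : TagHelper {
    private readonly IUrlHelperFactory _urlHelperFactory;

    public PageLinkTagHelper(IUrlHelperFactory urlHelperFactory) {
        _urlHelperFactory = urlHelperFactory;
    }

    [ViewContext]
    [HtmlAttributeNotBound]
    public ViewContext ViewContext { get; set; }

    public string PageAction { get; set; }

    // Every HTML attribute prefixed with "page-url-" lands in this dictionary.
    [HtmlAttributeName(DictionaryAttributePrefix = "page-url-")]
    public Dictionary<string, object> PageUrlValues { get; set; }
        = new Dictionary<string, object>();

    public override void Process(TagHelperContext context, TagHelperOutput output) {
        var helper = _urlHelperFactory.GetUrlHelper(ViewContext);
        TagBuilder link = new TagBuilder("a");
        PageUrlValues["productPage"] = 1;  // illustrative route value
        link.Attributes["href"] = helper.Action(PageAction, PageUrlValues);
        link.InnerHtml.Append("1");
        output.Content.AppendHtml(link);
    }
}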

View Components

  • Inherit from ViewComponent.
  • Logic is in a method:
    public IViewComponentResult Invoke();             // or
    public Task<IViewComponentResult> InvokeAsync();
  • View must be in location Views\Shared\Components\<view component name>\Default.cshtml (Default.cshtml is the conventional view file name).
  • When you want to render the view component’s view somewhere on your front end, call from Razor:
    @await Component.InvokeAsync("view component name")
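
    A minimal sketch (the component name and data are illustrative):

    public class NavigationMenuViewComponent : ViewComponent {
        public IViewComponentResult Invoke() {
            string[] categories = { "Cat1", "Cat2" };  // stand-in data
            // Renders Views/Shared/Components/NavigationMenu/Default.cshtml
            return View(categories);
        }
    }

    and in Razor: @await Component.InvokeAsync("NavigationMenu")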

Relevant Files

_ViewImports.cshtml

@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

Concepts

Model validation

[Required(ErrorMessage = "Country mandatory.")]

Model Binding

  • Skip automatic model binding by applying attribute:
    [BindNever]

Other

[UIHint("password")]
[Authorize]
[AllowAnonymous]
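
A small sketch combining the attributes above (the class and property names are hypothetical):

using System.ComponentModel.DataAnnotations;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class RegistrationModel {
    [BindNever]                                      // never taken from the request
    public int InternalId { get; set; }

    [Required(ErrorMessage = "Country mandatory.")]  // model validation
    public string Country { get; set; }

    [UIHint("password")]                             // render as a password input
    public string Password { get; set; }
}

// The authorization attributes go on controllers/actions:
[Authorize]
public class AccountController : Controller {
    [AllowAnonymous]
    public ViewResult Login() => View();
}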

Relevant types

IConfiguration

  • Fetches data from appsettings.json
    bool foo = (Configuration
                    .GetSection("ShortCircuitMiddleware")?
                    .GetValue<bool>("EnableBrowserShortCircuit"))
                    .Value;
  • You can inject an object of this type without declaring a DI binding – framework provides a default implementation out of the box.

TagBuilder

  • Builds HTML tags.

IUrlHelperFactory / IUrlHelper

  • Retrieve a Url Helper from ViewContext, to help you with routing.
  • IUrlHelper.Action()

ViewBag

  • passes data between controller and view.

TempData

  • part of the session state feature. A temporary data store; values persist until read for the first time.

RouteData

  • accessible from various other types: View Components, Controllers.
  • you can get details on current route via indexers
RouteData?.Values["category"]
  • if you are using RouteData from your controllers or view components, it is a bit difficult to set up values within your unit tests, because the controller’s or view component’s RouteData property is read-only (getter only). To set it up properly, you have to manually create several objects:
    NavigationMenuViewComponent target = new NavigationMenuViewComponent(mock.Object);
    target.ViewComponentContext = new ViewComponentContext() {
        ViewContext = new ViewContext {
            RouteData = new Microsoft.AspNetCore.Routing.RouteData()
        }
    };
    target.RouteData.Values["category"] = "Cat1";

JsonConvert

  • serialize and deserialize objects to and from JSON.

Controller.ModelState

IsValid;
AddModelError();

Other types

  • ConsoleTable – class to help you write out contents in a tabular format. Use with console output.

Front end stuff

Bootstrap

Font Awesome

  • Open source icons integrated into apps as fonts.
  • Install via Bower.
  • Include in your _Layout.cshtml
<link rel="stylesheet" asp-href-include="/lib/fontawesome/web-fonts-with-css/css/*.css">
  • Find icons at:
    https://fontawesome.com/icons?d=gallery

async/await

public async Task<long?> GetPageLength()
{
    HttpClient client = new HttpClient();
    HttpResponseMessage httpMessage = await client.GetAsync("http://apress.com");
    return httpMessage.Content.Headers.ContentLength;
}

public async Task<IActionResult> Index() {
    long? pageLength = await MyAsyncMethods.GetPageLength();
    return View(new string[] { $"Length: {pageLength}" });
}

String interpolation with nameof

var products = new[] {
    new { Name = "Prod1", Price = 12M, Category = "Cat1" },
    new { Name = "Prod2", Price = 112M, Category = "Cat2" },
    new { Name = "Prod3", Price = 1112M, Category = "Cat3" },
    new { Name = "Prod4", Price = 111112M, Category = "Cat4" }
};
return View(products.Select(p => $"{nameof(p.Name)}: {p.Name}, Price: {p.Price}, Category: {p.Category}"));

Delegates in C#

Delegate

public delegate bool Filter(Product p);
  • Delegates allow for creation of delegate types. These types have a specific signature and a name. You create new instances of those types.
  • Can be assigned lambdas and methods with appropriate signatures.
  • Two delegates that have the same signature, but belong to different delegate types are not convertible to one another. That is why we cannot assign a Func to a delegate variable.
  • Why would you use a delegate over a Func? The main reason is that you wish to give your delegate a meaningful name.
  • Please note the example below also contains a brief example of an anonymous delegate. Such a delegate instance can be assigned to both a custom delegate type and a Func.
  • Example:
// Define the delegate types.
public delegate bool Filter(Product p);
public delegate bool NewFilter(Product p);

// Method satisfying delegate signature.
private bool FilterByPrice(Product p) {
    if ((p?.Price ?? 0M) >= 20) {
        return true;
    }
    return false;
}

public void Foo() {
       IEnumerable<Product> products = null;
       // Initialize a delegate instance from lambda.
       Filter funcFilter = p => (p?.Price ?? 0M) >= 20;
       // Initialize a delegate instance from method.
       funcFilter = FilterByPrice;
       // Initialize a delegate instance using anonymous delegate.
       funcFilter = delegate (Product p) {
           if ((p?.Price ?? 0M) >= 20) {
               return true;
           }
           return false;
       };
       NewFilter newFuncFilter = null;
       // Cannot assign an instance of one delegate type to
       // another delegate type. The next line would not compile:
       // newFuncFilter = funcFilter;
       // Use delegate
       Bar(products, funcFilter);
       // Use lambda.
       Bar(products, p => (p?.Price ?? 0M) >= 20);
 }

// Method expects an instance of a specific delegate type.
 public void Bar(IEnumerable<Product> products, Filter filter) {
       foreach (Product product in products) {
             if (filter(product)) {
                   Console.WriteLine("Not filtered");
             } else {
                   Console.WriteLine("Filtered");
             }
       }
 }

Func

Func<TReturn>
Func<TArgs, TReturn>
Func<TArgs, TArgs, TReturn>
...
  • Func is a delegate type that lets you use methods without declaring a named delegate type of your own. In other words, someone at Microsoft has already declared a delegate called Func with a range of generic signatures. You are free to:
    • Satisfy the Func signatures with lambdas, methods and other Func delegates you provide.
    • Use it in your method signatures.
  • Can be assigned lambdas, methods with appropriate signatures and anonymous delegates.
  • You can assign to it another Func, provided the signatures are the same.
  • You cannot assign a delegate to a Func, nor vice versa, even if they have the same signature. See the section above on Delegates.
  • Have return values.
  • Example:
public void Foo() {
    IEnumerable<Product> products = null;
    Func<Product, bool> funcFilter = p => (p?.Price ?? 0M) >= 20;
    Func<Product, bool> newFuncFilter = null;
    // Assign one Func to another.
    newFuncFilter = funcFilter;
    // Use Func
    Bar(products, funcFilter);
    // Use lambda - same thing
    Bar(products, p => (p?.Price ?? 0M) >= 20);
}
// Declare you are using a Func.
void Bar(IEnumerable<Product> products, Func<Product, bool> filter) {
     foreach (Product product in products) {
         if (filter(product)) {
             Console.WriteLine("Not filtered");
         } else {
              Console.WriteLine("Filtered");
         }
     }
}

Action

Action                // No return value, no args
Action<TArgs>         // No return value, one arg
Action<TArgs, TArgs>  // No return value, two args
...
  • similar to Func, but without the return value.
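
A quick illustration:

// Action with one argument and no return value.
Action<string> log = message => Console.WriteLine(message);
log("Filtered");

// Action with two arguments.
Action<string, int> logCount = (name, count) => Console.WriteLine($"{name}: {count}");
logCount("Products", 4);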

Web tidbits

<!DOCTYPE html>
  • Document Type Declaration
  • Informs the validators what the document type is.
  • As far as modern web browsers are concerned, its main remaining purpose is to select the rendering mode: it informs the browser how to render the document. The above example will tell a modern browser to render a text/html serialization of an HTML5 document in “standards mode”.
  • Using any other DOCTYPE risks triggering quirks mode.
  • Recommendation: put it at the top of your documents.

Standards mode vs quirks mode

  • Quirks mode allows the browser to display web pages that predate standardization.
  • Standards mode is what the modern browsers use to display web pages that start with <!DOCTYPE html>.
  • There is also the “almost standards mode” with a small number of quirks implemented.
<meta name="viewport">
  • The viewport is the portion of the page that is visible. The page itself is rendered in a larger area; the viewport is the portion we are currently viewing.
  • Useful on mobile devices.
  • Will require further looking into (not a pun).


HTTPS

Commands

  • Powershell command to get info on HTTPS certificate:
& "C:\Program Files\Git\usr\bin\openssl.exe" s_client -connect mail.google.com:443

Basic terms

  • Public Key Infrastructure (PKI) – am I talking to whom I think I should be talking?
  • Transport Layer Security (TLS) – is anybody eavesdropping?
  • SSL works using a concept called asymmetric encryption: there is a public key and a private key.
    • A message encrypted using the public key can be read only by the holder of the private key.
    • A message signed using the private key can be verified by any holder of the public key. Keep your private keys secret!
  • Certificates are issued by Certificate Authority (CA). CA is a trusted entity that confirms the identity of a certain entity using a process of authentication (phone calls to the business, business must provide various proofs like documents etc).
    • A Root Certificate Authority is the trust anchor upon which trust in all less authoritative CAs is based.
  • Server’s certificate contains the server’s public key. The client uses it to encrypt its replies.
    • If some other site stole, let’s say, PayPal.com’s certificate, the client would be encrypting its messages using PayPal’s public key, but the illegitimate receivers wouldn’t be able to decrypt them, since they would not have the private key.
  • Communicating using asymmetric encryption is slow. Using symmetric encryption is much faster. Both client and server use the same key to encrypt and decrypt the messages. In essence, asymmetric encryption is used only to establish a safe way for client and server to exchange symmetric keys.
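
A tiny sketch of the asymmetric idea using .NET’s RSA type (illustration only – real TLS is of course not hand-coded like this):

using System;
using System.Security.Cryptography;
using System.Text;

public static class AsymmetricDemo {
    public static void Run() {
        using (RSA rsa = RSA.Create()) {
            byte[] secret = Encoding.UTF8.GetBytes("pre-master secret");
            // Anyone holding the public key can encrypt...
            byte[] cipher = rsa.Encrypt(secret, RSAEncryptionPadding.OaepSHA256);
            // ...but only the private key holder can decrypt.
            byte[] plain = rsa.Decrypt(cipher, RSAEncryptionPadding.OaepSHA256);
            Console.WriteLine(Encoding.UTF8.GetString(plain));
        }
    }
}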

TLS Handshake

  1. Client sends the client hello to the server. The request involves a list of cypher suites it supports (ways to encrypt the message).
  2. Establish identity: the server sends back its certificate to prove its identity. This certificate carries a chain of digital signatures belonging to CAs. The chain means that some top-level, highly trusted entity has checked and bestowed the right to issue certificates upon another trusted entity – the chain shows which CA entities those are.
  3. Client’s OS and browser have a built-in list of CAs they trust. When the client receives a certificate from a server, it consults that list to determine whether the server’s certificate is valid and whether it’s coming from a legitimate source. The client also checks that the certificate itself is valid (e.g. its dates).
  4. Client then generates a Pre-Master Secret and sends it back to the server. This message is encrypted using the server’s public key that was part of the server certificate – no one knows this secret except the client and the server.
  5. Both sides use this Pre-Master Secret to generate the symmetric key to be used. They perform the calculations independently and arrive at the same result.
  6. Client sends “Client finished”.
  7. Server sends a “Change Cipher Spec” message – it says to the client that the communication will use symmetric key from now on – the one each of them generated based on the Pre-Master Secret.
  8. Communication is done using the symmetric key from now on.


Cypher suites

  • Please note how cypher suites work:
    • Different encryptions are used at different phases of the TLS handshake*:
      • During the initial communication, while communicating using asymmetric encryption, RSA or Diffie-Hellman (e.g. ECDHE) is used.
      • Once symmetric keys are exchanged, AES is used.

* The encryption algorithms I just mentioned (RSA, ECDHE, AES) are not an exhaustive list.

Videos to watch


DDD – deployment scenarios

Bear in mind the following guidelines as you review the deployment options below:

  • Most important thing is to keep the domain model independent of any external first-party and third-party dependencies.
  • Deployment in this sense does not necessarily mean a separate assembly per item. It can also be a smartly designed set of namespaces or a standalone folder per layer. Again, the most important thing is to separate the parts of your application according to the Onion architecture, so that dependencies are managed in a predictable way.
  • All the options below were inspired by the Stack Overflow questions listed below under “Sources”.

Scenario #1 – complete autonomy:

<bc 1>
 |_ domain
 |_ application
 |_ presentation
 |_ infrastructure
<bc 2>
 |_ domain
 |_ application
 |_ presentation
 |_ infrastructure
  • Every BC is a completely separate unit of deployment – each BC is deployed as a standalone .dll.
  • This approach is very suitable for microservices.

Scenario #2 – independently deployable domain model:

domain
 |_ <bc 1>
 |_ <bc 2>
application
presentation
infrastructure
  • The benefit here is that your domain model is obviously independent of the rest of the app and can be deployed on its own.
  • In this scenario, presentation layer might be shared across multiple BCs.
  • All the BCs can share the infrastructure layer – that way you can reuse a lot of your other technical dependencies, you don’t have to abstract everything. Perhaps it facilitates faster development.
  • Personally, I don’t think this is a smart solution – application layer’s job is to encapsulate the domain model. By not including the application layer you leave the domain model exposed to the rest of the app.

Scenario #3: independently deployable domain model and application services

 <bc 1>
 |_ domain
 |_ application
<bc 2>
 |_ domain
 |_ application
|_ infrastructure
|_ presentation
  • Application services layer is packaged along with the domain model – this is a natural bundling since the two constitute Application Core.
  • In this scenario, presentation layer might be shared across multiple BCs.
  • I employed this version on one of my projects – it allowed me to bundle the application services layer along with the domain layer.
  • I could not rip out the infrastructure layer because there were some dependencies from the infrastructure layer towards the rest of the application (a Big Ball of Mud monolith).
  • Presentation layer was kept in the shared part of the app simply because it is an older app and a monolith – there is a lot of infrastructure in place that deals with loading respective applications and ripping this out would not be feasible.
  • The above outline described two BCs, but I deployed only one (there was no second BC).
  • The most important thing for me was the complete independence of the Domain Model. Additionally, by moving the domain model into a standalone assembly I made sure it would remain independent.

Scenario #4: independently deployable domain model and infrastructure

<bc 1>
 |_ domain
 |_ infrastructure
<bc 2>
 |_ domain
 |_ infrastructure
|_ application
|_ presentation
  • Referring to the previous scenario: if I had been able to break the dependencies between the infrastructure layer and the rest of the application, it would have resulted in the deployment scenario pictured here.
  • This deployment scenario would leave the central application depending only on the infrastructure assembly (presuming we were using only DTOs).

Sources:

  1. https://stackoverflow.com/questions/35235712/should-i-use-separate-projects-for-bounded-contexts-in-ddd-net
  2. https://stackoverflow.com/questions/35283061/what-should-i-consider-when-deciding-to-split-our-monolithic-web-application-int
  3. https://stackoverflow.com/questions/13159299/structure-of-a-single-bounded-context
  4. https://stackoverflow.com/questions/28590806/a-bounded-context-is-a-full-application?rq=1

DDD

[Image: DDD Concept Map, taken from Eric Evans’ DDD book; not reproduced in these notes.]

Chapter 2 – Strategic design with Bounded Context and Ubiquitous Language

  • The point of DDD is to fight Big Ball of Mud architecture in a meaningful way. Big Ball of Mud architecture results in a large, unbounded model, with an undefined, overlapping domain language.
  • Bounded Context (BC):
    • a set of concepts that it makes sense to group together. The business decides which concepts belong together – boundaries are determined by uncovering how the domain works.
    • One or several BCs the business considers to be of strategic, long-term value constitute the Core Domain.
  • Ubiquitous Language (UL):
    • describes the model within a single BC.
    • this, in fact, is the model. Having a UL allows you to develop the model further.
    • UL is a product of iterative talks between developers and domain experts. It is, in turn, a way for developers and domain experts to further talk about a business concept and understand each other fully.
    • It is vital to understand that UL evolves over time as our understanding of the business evolves and as the business itself changes.


  • How to determine what goes into a BC?
    • Test the concept using the Ubiquitous Language established for this BC: if the concept survives the stringent UL test, it belongs to the BC.
    • Think about whether this concept is part of the business’ strategic initiative a.k.a. Bounded Context.
    • Perhaps the concept should naturally belong to the BC, but you are naming it wrong?
  • How can we be sure that we have a valid BC?
    • Write down a model scenario: a description of how model elements work in unison to fulfill a task. You can use this scenario as a base for writing automated acceptance tests. These acceptance tests are used to validate your BC and they go with the source code in the repo.
  • Technical details regarding BC:
    • Only one team should work on a BC. This prevents UL misunderstandings between teams and allows the team to retain control, gain better insight into the domain and further evolve its understanding and UL/model.
    • Same team can work on multiple BCs.
    • Each BC must be in its own repo. Tests should accompany the source code.

Chapter 3 – Strategic design with Subdomains

  • Subdomain is an area of expertise within a wider system.
  • Each BC should be one subdomain. Try not to split BCs into multiple subdomains.
  • Three types of subdomains:
    • Core Domain – BCs of vital strategic interest to the business
    • Supporting Domains – other business-related concepts that are important and serve to enrich the core domain.
    • Generic Subdomains – other concepts that don’t serve a direct business purpose but play a supporting role – these can be bought as off-the-shelf applications. Remember to integrate them using an ACL.
  • C# namespace == subdomain.
  • Name your Bounded Contexts!

Chapter 4 – Strategic design with Context Mapping

  • Context Mapping allows for BCs to talk to each other. It is a way to integrate them.
  • In essence it is a translation between two different ULs.
  • Besides translating, it also serves to describe the relationship between the teams assigned to BCs involved.

Types of mapping

  • Shared kernel: a part of model is shared between the teams. Hard to maintain, a strain on teams, hampers further development of UL since teams are bound together and must coordinate.
  • Partnership: two teams agree that they will synchronize their ULs and all planned work. Nothing is shared, simply synchronized. Tough to maintain.
  • Customer-Supplier (Downstream-Upstream): Supplier acts as a service providing the necessary resources to the Customer. Supplier must provide resources that are of value to the Customer. This is a profitable relationship type for both teams as long as the Supplier is responsive to Customer’s real needs.
  • Anti-corruption layer (ACL): a translation layer on the customer’s side. It prevents customer’s BC becoming infected with supplier’s model.
  • Open host service (OHS): a well-defined and well-described protocol or interface through which customer can access resources provided by the supplier’s BC. It is called “open” because it is well-described and those that wish to integrate can easily do so.
  • Published Language (PL): A well-defined information exchange language. OHS serves and consumes PL. Usually in the form of JSON or XML. Think of it as an intermediary language between supplier’s and consumer’s UL.
  • Conformist: downstream team cannot sustain the effort of developing an ACL and decides to embrace the upstream’s PL.
  • Separate ways: no BC of interest exists, and the team decides to develop its own solution within its BC.
  • Big Ball of Mud: a growing number of aggregates cross-contaminate the model, creating unwanted dependencies. Changes cause rippling effects throughout the system, threatening to break the app. Tribal knowledge and speaking multiple ULs at once save the system from collapsing.

How to integrate?


  • RPC with SOAP:
    • Hides the underlying network behind a seemingly simple method call.
    • An efficient way to integrate, but susceptible to network latency and failure.
    • Implemented using OHS+PL and ACL.
  • RESTful HTTP:
    • Attention is focused on resources that are exchanged between BCs.
    • Similar to RPC with SOAP, but utilizes HTTP methods to facilitate communication from customer to supplier.
    • Do not expose entire aggregates, but rather “synthetic” resources that are shaped according to what the clients require. Such resources are “synthetic” from the supplier’s point of view because they do not exist within its BC – they are created for the sole purpose of supporting the customer.
    • Implemented using OHS+PL and ACL.


  • Messaging:
    • publishing BC publishes an event, subscribing BC subscribes to it. This approach doesn’t suffer from the temporal issues associated with SOAP or REST communication – there are no blocking calls.
    • Can be implemented using REST client polling an Atom feed for an ever-growing list of resources.
    • Clients can call the service BC with a command to execute some action, but the service will not respond synchronously – service will reply by publishing an event once it finishes.
    • Most important element with this approach to integration is the messaging mechanism. At minimum it must support an At-Least-Once delivery, where all messages are resent until the subscriber responds. This also means that the subscriber must be implemented in an idempotent manner: only the first message received must make an impact, all subsequent received messages must be ignored or handled in a manner that doesn’t change the subscriber’s state.
    • Subscriber should take care not to become a conformist of the event format: subscriber must simply extract and model the necessary data into what it requires.
    • What data to send as part of the event?
      • Option A: a rich Domain Event. It is hard to predict what the clients might need. There is also the question of security; perhaps not all subscribers should see all the data. With this approach, the consumers are independent after they receive the event, since they have all the data they need.
      • Option B: Consumers receive a basic Domain Event with an identifier and then query back (via a SOAP or REST) for more data. This approach allows for better control on the supplier’s side.

Chapter 5 – Tactical design with Aggregates

Aggregate rules

  1. The root Entity has global identity and is ultimately responsible for checking invariants
  2. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
  3. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
  4. Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
  5. Objects within the Aggregate can hold references to other Aggregate roots.
  6. A delete operation must remove everything within the Aggregate boundary all at once
  7. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.

Aggregates

  • Each concept in your BC will be an aggregate. Think of aggregates as a transactional consistency boundary.
  • Invariants are maintained within an aggregate. Aggregate Roots have APIs through which we change the internal state of an aggregate – aggregates must never be in an invalid state. There are two approaches that you can use here:
    1. One large API on your Aggregate Root. You do not reference non-root entities, but rather change state through the Aggregate Root. This approach may make for a larger API on the root, which might lead you to break the root up into multiple roots – such a fragmented representation might not reflect the business accurately. On the other hand, this approach is the simplest way to maintain invariants.
    2. Allow clients to hold transient references to non-root entities – only for one operation and within a single block of code (as discussed here). Calling non-root entities in such a manner might lead to breaking invariants: Lerman and Smith solve this by having the non-root raise an event to which the root is subscribed – this way the root can maintain its invariants. Seems a bit contrived, but does lead to smaller APIs.
  • Structure: (diagram not preserved in these notes)
  • Name of the Aggregate Root is the name of the entire aggregate. Aggregate Root must be an Entity type of object.
  • Entity vs Value object:
    • Entity has an identity and (perhaps) behaviour. It cannot be swapped for another entity, since it is unique. Equivalence is determined through identity. Each entity has a history of changes it went through (even though we perhaps don’t maintain that history).
    • Value Object doesn’t have an identity: it can easily be swapped for another instance of the same type. Think of an Integer(5) – any Integer(5) is the same as the next one. Any change to a Value Object causes it to be destroyed and recreated.
  • Entity behaviour serves a few purposes:
    • Methods that invoke validation of object state to determine whether they fit business requirements.
    • Methods that invoke business actions to be performed on object.
    • Methods that invoke business processes involving the object.
  • You can build entities out of value objects. It might use less storage space if you are able to persist value objects inline with the entity object – this also leads to more efficient retrieval (no joins).
  • Why create multiple aggregates? Why can’t we simply lump all the concepts from a single BC into one large cluster aggregate? Large aggregates make sense from a composition standpoint: you can simply traverse entire object graph and get to any object of interest. Downsides to such approach are:
    1.  Transactional issues with concurrent data manipulation: two users make changes to different (seemingly unrelated) parts of the aggregate – this causes the entire aggregate to be persisted – but since one change is committed first, the subsequent change is considered stale and is rejected. Such cases should not happen – after all, the users were changing different parts of the aggregate – why should they be rejected?
    2. Scaling issues: the point above about transactional problems regarding concurrency will only get worse as the number of users grows.
    3. Performance: does not make sense to retrieve a huge object graph simply to make one minor change, and then later on to persist this entire huge graph.
    4. Memory: large object graphs have a large memory footprint. Large memory footprint also strains the GC.

Aggregate – rules of thumb

  1. Protect business invariants inside aggregate boundaries.
    • Business determines aggregates by determining what must be consistent when a transaction is committed.
    • A properly designed BC modifies only one aggregate per single transaction. This also means that an application UI should pay attention so it does not allow the user to change too much, otherwise we will be forced to manipulate more than one aggregate per transaction, which leads us to a) risking transactional failures because multiple users were working on the same aggregate and b) scaling issues caused by large aggregates.
    • This does not mean we will not have use cases that span aggregates – it is perfectly fine for one aggregate to reference another – the problem is if we want to change multiple aggregates simultaneously – resort to eventual consistency here.
    • Keep an eye out for false invariants. They might fool you into creating aggregates that scope multiple entities in order to maintain something you perceived as an invariant, but which does not have any grounding in the domain itself. Such large aggregates suffer from usual problems.
  2. Design smaller aggregates: better control of transactional scoping, they are more in line with SRP principle, less transactional risk, smaller memory footprint, better scaling, faster retrieval.
    Favor modelling aggregates with value objects as much as possible. When you think about modelling a portion of an aggregate using an entity object, think to yourself: will this part change over time or can it be completely replaced? Think of the Integer(5) example mentioned earlier – most parts are easily replaceable.
  3. Reference other aggregates by identity only – this facilitates smaller aggregates because they simply lack a way to become big: all the references between aggregates are maintained by the application service or domain service – no references from the domain model, besides the aggregate’s identifier (see the sketch after this list).
    No chance for a transactional failure due to two people changing different parts of the same aggregate: each aggregate is narrowly focused.
    When you require another aggregate, you have two choices:

    • both aggregates in the same BC: you can inject the relevant repository into the aggregate’s application service or the domain service.
    • aggregates in different BCs: you can call the other BC through a repository that implements a REST service call. Inject the repository to the aggregate’s application service or the domain service. Perhaps a DTO is needed to send the aggregates across the wire? Consult the Onion layer responsibilities to check where the DTOs go within the architecture.
  4. Update other aggregates using eventual consistency. Do not persist multiple aggregates within the same transaction, because it might lead to a transactional failure and at least one of the aggregates might not get persisted, thus resulting in an inconsistent state. Rather, when you do the action on Aggregate A, raise an event to the messaging bus and wait for the subscribing Aggregate B to pick up on it – thus the invariant will eventually be satisfied and your aggregates will be in a consistent state.
    Use eventual consistency to maintain consistency across aggregates in same BCs and in different BCs.
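
The sketch referenced in rule 3 above – referencing another aggregate by identity only (all type names are hypothetical):

public class SprintId {
    public string Value { get; }
    public SprintId(string value) { Value = value; }
}

public class BacklogItem {
    public string Id { get; private set; }

    // Identity of the other aggregate – never a direct Sprint reference.
    public SprintId CommittedSprintId { get; private set; }

    public void CommitToSprint(SprintId sprintId) {
        CommittedSprintId = sprintId;
        // Raise a BacklogItemCommitted domain event here; the Sprint
        // aggregate is then updated via eventual consistency (rule 4).
    }
}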

Whose job is it?

  • If you are in doubt whether to use transactional or eventual consistency, ask the domain expert: who is in charge of making sure the various aggregates are consistently persisted? Can the user themselves consistently persist the aggregate, or is another user/system necessary to finalize the consistency? If the user themselves can make the aggregate consistently persisted, then we can opt for transactional consistency.
  • Sometimes users will request that they authorize a certain state transition. Other times they will request that things be automated. Asking your domain expert and the team the above question will provide you with additional insights into the domain.

Reasons to break the rules

  • User interface convenience: a user wants to do a bulk change on a concept that belongs to a single root. We can then do batch processing of all the instances (perhaps using a loop). Such persisting can be done with transactional consistency, since in effect all we are doing is a bunch of single similar transactions.
  • User-affinity: sometimes there is no possibility that multiple users will work simultaneously on the same set of aggregates. In that case there might be no real need for eventual consistency, since transactional consistency is sufficient in such a risk-free case (multiple aggregates persisted in a single transaction). Even if multiple users did change the same aggregate, we would still have the benefit of optimistic concurrency (NHibernate) and one of the aggregate instances would be rejected.
  • Query performance: querying each aggregate separately might in some cases be a burden on the database. We can then introduce a proper instance reference from one aggregate to another (as opposed to merely identifier reference).

How to design your aggregates?

  1. Write down each concept as an aggregate. Do not put additional entities into the aggregate. Flesh out the aggregate: identities, attributes that allow finding the aggregate (name, VAT, …) and other attributes required for maintaining invariants and business rules.
  2. Think how a change in one aggregate will affect each of the other aggregates. Create a table for each aggregate, listing all other aggregates beneath it. There are three options: “N/A”, “Immediately”, “Eventually”.
  3. Discuss the above table with your domain expert. Go through each combination. Be careful not to mark each combination with “Immediately”. Think about how something might work if the business was using a pen and paper system – not all invariants could be satisfied immediately.
  4. Aggregates that require immediate consistency should be merged. Others can remain apart.
  5. Those that require eventual consistency will be updated using eventual consistency.

Design implementation details:

An example – small aggregates, one aggregate creates another using an application service

This is from Vaughn Vernon’s Effective Aggregate Design Pt II.


public class Product ... {
     /* Acts as a factory */
     public BacklogItem planBacklogItem(String aSummary, 
                                        String aCategory,
                                        BacklogItemType aType, 
                                        StoryPoints aStoryPoints) {
         ...
     }
     public Release scheduleRelease(String aName, 
                                    String aDescription,
                                    Date aBegins, 
                                    Date anEnds) {
         ...
     }
     public Sprint scheduleSprint(String aName, String aGoals,
                                  Date aBegins, Date anEnds) {
          ...
     }
     ...
}
public class BacklogItem {
    public void CommitToSprint(String sprintId) {
      ... maintain business invariant regarding backlog items and sprints.
      ... Raise new event (new BacklogItemCommitted());
    }
}
public class BacklogItemService...{
 ...
 @Transactional
 public void PlanBacklogItem(
       String aTenantId, String aProductId,
       String aSummary, String aCategory,
       String aBacklogItemType, String aStoryPoints) {
    Product product = productRepository.productOfId(new TenantId(aTenantId),
                                                    new ProductId(aProductId));
    BacklogItem plannedBacklogItem = 
                             product.planBacklogItem(aSummary,
                                 aCategory,
                                 BacklogItemType.valueOf(aBacklogItemType),
                                 StoryPoints.valueOf(aStoryPoints));
 backlogItemRepository.add(plannedBacklogItem);
 }
 ...
 public void CommitBacklogItemToSprint(
       String tenantId, String backlogItemId, 
       String productId, String sprintId) {
    BacklogItem backlogItem = 
                      backlogItemRepository.GetBacklogItem(backlogItemId);
    backlogItem.CommitToSprint(sprintId);
    backlogItemRepository.Save(backlogItem);
 }
}
  • Another example from Lev Gorodinski here: just one aggregate here, PurchaseOrder Aggregate.

Chapter 6 – Tactical design with Domain Events

  • Each Domain Event class should implement the same IDomainEvent interface. It should, at minimum, convey at which moment in time it occurred.
  • Type names should be a statement of a past occurrence (a verb in a past tense): BacklogItemCommitted, BacklogItemScheduled.
  • Events are caused by commands being called (i.e. method calls, like CreateProduct). Also, they can be caused by timers. Method arguments should become properties of the domain event.
  • Events are raised by the Domain Model.
  • Convey only basic data in the Domain event. If you cram too much data in there, consumers won’t know what is relevant and what actually happened.
  • Events are saved to the event store, part of the same transaction as the change they’re reporting about.
  • If you save multiple Domain Events in a certain order, they might not arrive at their consumers in that same order. Consumers must make sure they interpret events in the same order they were issued: causality must be preserved. This is done by using some sort of identifier that orders the events.
  • Don’t forget, messaging mechanisms should implement At-Least-Once delivery. Remember consumer idempotency.
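
A minimal sketch of the common interface idea (the interface and event names are illustrative):

using System;

public interface IDomainEvent {
    DateTime OccurredOn { get; }  // at minimum, when the event happened
}

public class BacklogItemCommitted : IDomainEvent {
    public DateTime OccurredOn { get; } = DateTime.UtcNow;
    public string BacklogItemId { get; }
    public string SprintId { get; }

    // Command arguments become properties of the domain event.
    public BacklogItemCommitted(string backlogItemId, string sprintId) {
        BacklogItemId = backlogItemId;
        SprintId = sprintId;
    }
}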

Event Sourcing

  • Keeping all events so we can restore an object’s state. Think of event sourcing as keeping a record of everything that happened to an object.
  • Event store is append only, so it is fast.
  • Allows you to analyze data later on, even if you are not sure what to do with it while you are gathering it: usage statistics, habits, etc…
  • Performance related terms: “caching”, “snapshots” (pp. 109)

Chapter 7 – Acceleration and Management tools

Event Storming

  • A technique helping you to determine how to model your domain. It consists of a series of steps, leading to aggregates, domain events and bounded contexts.
  • Gather your business domain experts and developers in one room and go through the process of event storming. Follow the “two-pizza rule” – number of people present at the meeting should be such they can be fed by two pizzas. In essence it means keeping the number of attendees under eight.
  • The main point of Event Storming is to model process, not data. Therefore, the emphasis is on events that describe the domain. After that you determine which commands are responsible for causing the events, and only then do you associate commands with aggregates. This sort of approach clearly puts the emphasis on how the business views the activity being modelled. Do not fall into the trap of modelling entities first!
  1. Create a series of Domain Events using sticky notes (orange sticky notes). This emphasizes the business process, as opposed to data and its structure. One sticky note – one DE. An event should be a past-tense verb (BacklogItemCommitted). Place notes in time order, from left to right. Parallel events go above each other. Mark all trouble spots with a purple/red sticky note, with text describing the problem. If an event triggers a process, create the process on a lilac sticky note, with an arrow going from the event to the process. If an event is outside your Core Domain, do not go to great lengths to model it.
  2. Create the Commands that trigger these events (light blue sticky notes). It is a verb in present tense (CommitBacklogItem). Create pairs of commands/events, put the sticky to the left of the associated event(s), but be aware that some events might not have a command associated (time-triggered events). It is also possible a single command causes multiple events. Write down roles (if important) performing the commands (yellow sticky in the corner of the command sticky). If a command triggers a process, create the process on a lilac note, with an arrow going from command to the process.
  3. Create an Entity/Aggregate associated with the executed Command (yellow sticky notes). It is a noun (BacklogItem). The aggregate is where the command will be located. Put the sticky underneath the Domain Event and Command stickies. If you find the same aggregate is reused multiple times along your timeline of events, create a separate sticky each time.
  4. Draw boundaries around groups of aggregates. These are your Bounded Contexts. Mark them with a pink sticky. Write their name down on the pink sticky note. The boundary will most likely be drawn on a departmental boundary, on a conflicting definition of the same concept or on a concept that is important but is not part of the Core Domain (print forms spring to mind here). These boundaries will have domain events going from one side of the boundary to another, so you draw arrows showing how the domain events flow – this models how some events arrive into a Bounded Context without being initiated by a Domain Event from within that Bounded Context.
  5. (Optional) Identify views needed by users to carry out their actions (green sticky notes). You can show only the views you deem are of relevance, if you want. You can even draw a quick tiny mockup if you think it adds value.

Managing DDD on an Agile project

  • Write down the various elements of your DDD model. Add a size dimension to each element (small, medium, large) and assign a time estimate to each. After every sprint, during the sprint retrospective, adjust your estimates. You can express time as either hours or points.
  • When you are starting to implement a feature, break it down into DDD elements from the table above, sum up the times and you have your estimation for the feature.
                      Small   Medium   Large
  Event               0.5     1.2      2
  Command             0.6     0.8      1.1
  Aggregate/Entity    0.1     0.2      0.4
  Views               0.5     0.6      0.7
  … whatever else you feel is important as an element


WCF and posting JSON to REST API

Recently I had to post JSON to my REST API and for some reason my data was being deserialized as null.

The reason was that the JSON element’s name was in camelCase, while the API parameter name was in PascalCase. While investigating this issue I came upon another interesting tidbit, so I wanted to share. As you know, an API interface can be marked with the [WebInvoke] attribute, which can be constructed using the BodyStyle parameter. The values of the parameter I’ve played around with are:

  • No parameter supplied:
    • You must post the request without any element around the JSON structure:
[{
  "UserLevel": 2,
  "Users": [],
  "Description": "someText"
},
{
  "UserLevel": 2,
  "Users": [],
  "Description": "someText"
}]
  • BodyStyle=WebMessageBodyStyle.Wrapped:
    • You must post the request with an element surrounding the JSON structure. Pay attention to the leading element consentGroups. This is the approach I decided upon, as it provides more flexibility (your API method can have multiple parameters this way):
      {
        "consentGroups": [{
            "UserLevel": 2,
            "Users": [],
            "Description": "someText"
          }, {
            "UserLevel": 2,
            "Users": [],
            "Description": "someText"
          }]
      }
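
For reference, a hedged sketch of the matching WCF operation (the service, method and type names are hypothetical; the point is that the parameter name must match the wrapping element consentGroups):

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IConsentService {
    [OperationContract]
    [WebInvoke(Method = "POST",
               BodyStyle = WebMessageBodyStyle.Wrapped,
               RequestFormat = WebMessageFormat.Json,
               ResponseFormat = WebMessageFormat.Json)]
    // Parameter name "consentGroups" matches the wrapping JSON element.
    void SaveConsentGroups(List<ConsentGroup> consentGroups);
}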

Algorithms and Data Structures

Stack via Linked lists

  • In Java
    • size: ~40N
    • cost:
      Init 1
      Push 1
      Pop 1
      Size 1
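
    A minimal linked-list stack sketch, in C# rather than Java since these notes are otherwise C#:

    public class LinkedStack<T> {
        private class Node { public T Item; public Node Next; }
        private Node first;          // top of the stack
        public int Size { get; private set; }

        public void Push(T item) {   // O(1): one node allocation, one link
            first = new Node { Item = item, Next = first };
            Size++;
        }

        public T Pop() {             // O(1): relink the head
            T item = first.Item;
            first = first.Next;
            Size--;
            return item;
        }
    }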

Stack via resizeable array (doubling)

  • In Java
    • size: ~8N – ~32N
    • cost:
             Best   Amortized*   Worst
      Init   1      1            1
      Push   1      1            N
      Pop    1      1            N
      Size   1      1            1

* Amortized means that over a number of inserts and resizes, the average cost per operation remains constant. It is true that every resize means we have to copy the array – this incurs O(N) cost. However, the cost of resizing at most doubles the total cost of all the previous additions, so over the entire period it leads to a mere constant-factor increase, which according to algorithm analysis principles means the amortized cost is still constant.

  • Array doubling:
    • Every time the array capacity is reached, we copy the array contents into an array of twice the size.
    • As the array gets emptied (due to pops), we do not downsize as soon as it reaches half its size – such an approach would lead to thrashing if a push()-pop()-push()-pop() sequence followed. To mitigate, we halve the array once the count reaches a quarter of the array size (see the sketch below).
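
    A sketch of the doubling/halving policy in C#:

    using System;

    public class ArrayStack<T> {
        private T[] items = new T[1];
        private int count;

        public void Push(T item) {
            if (count == items.Length)
                Resize(2 * items.Length);          // double when full
            items[count++] = item;
        }

        public T Pop() {
            T item = items[--count];
            items[count] = default(T);             // avoid loitering references
            if (count > 0 && count == items.Length / 4)
                Resize(items.Length / 2);          // halve only at one quarter full
            return item;
        }

        private void Resize(int capacity) {
            T[] next = new T[capacity];
            Array.Copy(items, next, count);        // O(N) copy, amortized O(1) per push
            items = next;
        }
    }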

Linked list vs array implementations

  • You should avoid using linked list implementations of any data structure (not just stack) because:
    • They take up too much space.
    • Do not provide for O(1) lookup – entire structure must be traversed to get to an element at a specific index.
  • When compared to array implementations, the benefit is that they are more predictable, due to the lack of resizing. Nevertheless, an array’s amortized cost is lower than a linked list’s, due to less reference fiddling.
  • Other sources also state that arrays provide for better locality of reference (since array contents are next to each other) and thus better utilization of cache.
  • The downside of array implementations is that on occasion, when resizing happens, the cost skyrockets to O(N), so they might be tricky for time-sensitive operations.