EDI, RPC, SOAP, MQ, REST and Interoperability

All of these concepts help to address the same concern: how do we move data from System A to System B when these systems have no direct linkage (no common data store)? The following are a few of the technologies that have served as answers to this question.


There was a different kind of web back in the day


EDI (Electronic Data Interchange): An exchange of data that is usually large in volume compared to other remote data transfer methods (batches of thousands of records vs. one JSON record or one row of an RDBMS table), and usually done in conjunction with some kind of ETL and/or data warehousing process. EDI is typically used for large, domain-specific transactions; the transfer itself is performed over SFTP or another secure file transfer protocol, often with XSLT applied when transforming the data to or from XML. EDI files must adhere to strict formatting standards such as UN/EDIFACT or ANSI X12. This is helpful (and incidentally adds a layer of complexity for would-be attackers) when trying to ensure that a large number of disparate reporting parties are all sending data in the right format: if an EDI file's format is wrong in any way, it won't be accepted at the destination.


This is an example of an EDI "EDIFACT" formatted file
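
For a rough sense of the format, a minimal, hypothetical EDIFACT-style purchase-order interchange looks something like this (segment content, party IDs and reference values are illustrative only):

UNB+UNOC:3+SENDERID+RECEIVERID+200901:1200+REF00001'
UNH+1+ORDERS:D:96A:UN'
BGM+220+PO000123+9'
DTM+137:20200901:102'
UNT+4+1'
UNZ+1+REF00001'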


RPC (Remote Procedure Call): A highly coupled abstraction (if you can call it abstraction- it's really more like a video game accessory that only works on certain consoles) that essentially requires the client and server to be running the same program. That was once feasible (and in some cases is still desirable for channel security), but it is not typically the ideal way to communicate openly. For closed, secure communications, however, RPC remains very much a part of the technologies that facilitate secure messaging in applications like Telegram, Signal and the like.


As stated, RPC implies client-server sharing code (see "RPC thread" spanning above)
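
To make the coupling concrete, here is a hypothetical sketch (not tied to any particular RPC framework): both sides must compile against the exact same contract, and any change to it has to be deployed to client and server in lockstep.

public interface IAccountService
{
    // The client calls this as if it were a local method; the RPC plumbing
    // serializes the call, executes it on the server and returns the result.
    decimal GetBalance(int accountId);
}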


SOAP (Simple Object Access Protocol): This had been the standard for web services (indeed it is why Microsoft created WCF) until HTTP-based/RESTful APIs replaced it as the standard choice among developers of newer projects around the early 2010's. It is self-describing (.wsdl) and allows for communication over virtually any point-to-point communications protocol. SOAP is, however, quite prescriptive about how SOAP message "objects" are defined, leading to a lot of (interface) metadata inside the envelope that may have little to do with the task at hand but which is needed so that the client can understand the message and deserialize the object if necessary.


An example of a faulting SOAP call's SOAP response
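
A SOAP 1.1 fault response generally takes a shape along these lines (the fault string here is made up):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>Validation error: required element 'OrderId' is missing</faultstring>
      <detail>...</detail>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>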


MQ (Message Queuing): the primary concept of utilizing message queues and exchanges is the asynchronous nature in which messages are pushed and pulled, vs. a REST or SOAP service call, which is request/response synchronous by design.

This architectural model also supports highly-decoupled design, whereby many applications- all written in different languages and under disparate frameworks- can utilize the same MQ Exchange and share communication across queues.

Frameworks like RabbitMQ facilitate event sourcing design with queues; an app is often both a Producer and Consumer
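
As a rough sketch of that Producer/Consumer duality (assuming the RabbitMQ.Client NuGet package and a local broker; the queue name and payload are hypothetical):

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class OrderQueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // declare (idempotently) the queue both sides will use
        channel.QueueDeclare(queue: "orders", durable: true, exclusive: false, autoDelete: false, arguments: null);

        // consume asynchronously: the handler fires whenever a message arrives
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
            Console.WriteLine("Received: " + Encoding.UTF8.GetString(ea.Body.ToArray()));
        channel.BasicConsume(queue: "orders", autoAck: true, consumer: consumer);

        // publish: fire-and-forget; no response is awaited, unlike a REST/SOAP call
        var body = Encoding.UTF8.GetBytes("{ \"orderId\": 1001 }");
        channel.BasicPublish(exchange: "", routingKey: "orders", basicProperties: null, body: body);

        Console.ReadLine();
    }
}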



REST (REpresentational State Transfer) APIs: Operating completely (and solely) over HTTP(S), primarily via GET/POST actions that have undergone roughly three decades of incremental improvement- for as long as the web lives on, REST APIs will be at its foundation. They aren't self-describing, though descriptive metadata can be embedded in the naming of API resources to achieve similar reflection, and there are usually descriptive, interactive specifications for large publicly hosted APIs like the ones from Google Maps and Twitter. RESTful APIs are not highly prescriptive about API structure or operations: an endpoint just has to be an HTTP action method that any HTTP client can understand. Most APIs default to passing JSON around when objects are involved in POST arguments or GET return values, but there is no reason you cannot return XML. Or a file. Or a streaming video. Or whatever floats your software ship. People create RESTful API wrappers for SOAP services all the time.
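
Consuming a REST resource requires nothing more than an HTTP client- a minimal sketch in C# (the endpoint URL is made up):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RestClientDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // GET a resource; the response body here happens to be JSON, but it could be anything
        HttpResponseMessage response = await client.GetAsync("https://api.example.com/v1/locations/42");
        response.EnsureSuccessStatusCode();

        string json = await response.Content.ReadAsStringAsync();
        Console.WriteLine(json);
    }
}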



Just leaving the REST for last.. 😉


Although many in the software development community prefer RESTful APIs, Message Queuing or some combination of the two for new projects, we must be mindful of the fairly recent past, which has littered the landscape with SOAP, EDI, ETL and an assortment of proprietary, highly customized (and still active) RPC communication channels (for example, SOAP streaming over UDP).

There was a time before the web as we know it today when machines like ATMs and TicketMaster terminals were already interconnected. Those connections were not regular HTTP packets traveling over TCP to and from port 80 or 443, but rather fixed-length frames of protocols like Asynchronous Transfer Mode (ATM) or other early transfer protocols. Many of those connections still exist, even if upgraded for modern times via something like WCF (.NET) or JAX-WS (Java).

There are certain things only SOAP can do. There are certain legacy systems which will not be updated any time soon (because "if it ain't broke") that still need to interface with SOAP clients. As technologists we have to deal with this and understand the tradeoffs of using different frameworks for different jobs. 

In the same way that there is no perfect language for every scenario, no one way of electronically transferring data and interacting with remote systems is always the "best way" (although REST APIs come pretty close, as so much of our connected world is now HTTP-based).

The best choice for sending remote communications, like any choice of framework, language or design paradigm, is never fixed. The answer requires careful, domain-centric, thorough analysis of the problem and the resources available to solve it. In software development, the answer to "which way is the best way?" is invariably "it depends".





"Next Big" Software Religiosity and The Go-nowhere Rush

There is far too much religious extremism in information technology these days. There have always been camps (extreme anti-Microsoft sentiment, or its sad corporate counterpart: disdain, fear and suspicion of all things open-source)- but these days it has gotten to the point where sensible, cheap, reliable, proven solutions that everyone on the team understands are thrown out in favor of chasing the next big thing that some bigshot at some big conference declared was going to be the next, next, next "big thing".


This image does have its merits..


Amid all the continuous rush to be cutting edge- without understanding what that edge can actually do for you, and without a strong data foundation to build upon- it doesn't matter what tools are out there. You are still stuck with ideas and not programs.

Design and develop with what works for your particular team and project and within the context of the environments of your stakeholders (if all but 2% of your customers use Android then the iPhone version of your app may not be as important as you think). Above all else, make sure you understand the domain knowledge behind the data your application will be persisting and passing around. That (the data understanding) is the heart of every program that stores, processes, transmits or even simply reads/prints/paints- any kind of communication.

Data sense-making and software development are hard work. And they are not done in a void. I suggest reading Stephen Few's "Big Data, Big Dupe"- a little paperback containing some 90 pages of important wisdom for this modern rapid-fire information age, which preempts knowledge of data in favor of slogans and metrics about data.

In short, the essence of the book is that if you have, say, 10TB of crap data that is always causing ETL failures that your personnel spend countless hours trying to correct... you may indeed have "big data" per some misguided tech journalist's definition... but you still have crap data. Understand your data before you try to understand how best to fit it inside the newest shiny box.

Take, for example, message queues and their usage in modern web application development. There seems to be a lot of misunderstanding about what MQ is, and even some who claim it is a new technology (MSMQ has been around since the late 1990s; IBM MQ has been in use since 1993). Basic email has operated on a store-and-forward, publisher/subscriber messaging paradigm (SMTP) that works in much the same way as modern MQ implementations (minus some bells and whistles)- since the early '70s.

These things aren't as complicated as they seem but they are complicated. And it's perilous to keep jumping from new trick to new trick whilst ignoring foundational, timeless software principles.

I would go so far as to say it is injurious to current and future generations of software developers to keep focusing on buzzwords, zooming out and away from the hard-but-necessary work of understanding the data, and then wondering why the tool or framework flavor of the year did not save the day.

Getting Familiar with Microsoft Azure

I'd like to summarize what I've learned in the past couple years of using MS Azure for personal and professional software development. Keep in mind this is coming from the perspective of a developer; Azure can be used for many interesting things outside the scope of just deploying, hosting, and scaling software in the cloud.


The Azure Portal UI is intuitive, constantly being updated (for the better), and contains tools to create and configure nearly anything you can imagine  


First, it's a bit of a maze.

Then it's amazing.


Starting out

The idea of any cloud provider is to enable IaaS, SaaS and PaaS, among other XaaS's. Instead of having to provision physical machines, network equipment and associated hardware, and go out to dozens of different vendors to manage service agreements for the various services a company uses, a company can now move most of that distributed mess into its own private cloud and manage everything in one place.

And that one place is highly secure, geo-redundant and hosted on some of the best and newest hardware available.





Azure resources

In Azure you have the concept of resources, which consume resource units. Anything can be a resource: a network card, a virtual machine, a firewall security policy- they are all resources in the world of Azure. You can create, modify and delete resources virtually at will- or on a schedule, through automation scripts that operate on what are known as ARM (Azure Resource Manager) templates, which are essentially representations of Azure resources in the form of JSON.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "type": "string",
      "metadata": {
        "description": "Username for the Virtual Machine."
      }
    },
    "adminPassword": {
      "type": "securestring",
      "minLength": 12,
      "metadata": {
        "description": "Password for the Virtual Machine."
      }
    },
    "dnsLabelPrefix": {
      "type": "string",
      "defaultValue": "[toLower(concat(parameters('vmName'),'-', uniqueString(resourceGroup().id, parameters('vmName'))))]",
      "metadata": {
        "description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
      }
    },
    "publicIpName": {
      "type": "string",
      "defaultValue": "myPublicIP",
      "metadata": {
        "description": "Name for the Public IP used to access the Virtual Machine."
      }
    },
    "publicIPAllocationMethod": {
      "type": "string",
      "defaultValue": "Dynamic",
      "allowedValues": [
        "Dynamic",
        "Static"
      ],
      "metadata": {
        "description": "Allocation method for the Public IP used to access the Virtual Machine."
      }
    },
    "publicIpSku": {
      "type": "string",
      "defaultValue": "Basic",
      "allowedValues": [
        "Basic",
        "Standard"
      ],
      "metadata": {
        "description": "SKU for the Public IP used to access the Virtual Machine."
      }
    },

    "OSVersion": {
      "type": "string",
      "defaultValue": "2019-Datacenter",
      "allowedValues": [
        "2008-R2-SP1",
        "2012-Datacenter",
        "2012-R2-Datacenter",
        "2016-Nano-Server",
        "2016-Datacenter-with-Containers",
        "2016-Datacenter",
        "2019-Datacenter",
        "2019-Datacenter-Core",
        "2019-Datacenter-Core-smalldisk",
        "2019-Datacenter-Core-with-Containers",
        "2019-Datacenter-Core-with-Containers-smalldisk",
        "2019-Datacenter-smalldisk",
        "2019-Datacenter-with-Containers",
        "2019-Datacenter-with-Containers-smalldisk"
      ],
      "metadata": {
        "description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version."
      }
    },
    "vmSize": {
      "type": "string",
      "defaultValue": "Standard_D2_v3",
      "metadata": {
        "description": "Size of the virtual machine."
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for all resources."
      }
    },
    "vmName": {
      "type": "string",
      "defaultValue": "simple-vm",
      "metadata": {
        "description": "Location for all resources."
      }
    }
  },
  "variables": {
    "storageAccountName": "[concat('bootdiags', uniquestring(resourceGroup().id))]",
    "nicName": "myVMNic",
    "addressPrefix": "10.0.0.0/16",
    "subnetName": "Subnet",
    "subnetPrefix": "10.0.0.0/24",
    "virtualNetworkName": "MyVNET",
    "subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'), variables('subnetName'))]",
    "networkSecurityGroupName": "default-NSG"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[variables('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "Standard_LRS"
      },
      "kind": "Storage",
      "properties": {}
    },
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2020-06-01",
      "name": "[parameters('publicIPName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('publicIpSku')]"
      },
      "properties": {
        "publicIPAllocationMethod": "[parameters('publicIPAllocationMethod')]",
        "dnsSettings": {
          "domainNameLabel": "[parameters('dnsLabelPrefix')]"
        }
      }
    },
    {
      "type": "Microsoft.Network/networkSecurityGroups",
      "apiVersion": "2020-06-01",
      "name": "[variables('networkSecurityGroupName')]",
      "location": "[parameters('location')]",
      "properties": {
        "securityRules": [
          {
            "name": "default-allow-3389",
            "properties": {
              "priority": 1000,
              "access": "Allow",
              "direction": "Inbound",
              "destinationPortRange": "3389",
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*"
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2020-06-01",
      "name": "[variables('virtualNetworkName')]",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
      ],
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[variables('subnetName')]",
            "properties": {
              "addressPrefix": "[variables('subnetPrefix')]",
              "networkSecurityGroup": {
                "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
              }
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Network/networkInterfaces",
      "apiVersion": "2020-06-01",
      "name": "[variables('nicName')]",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]",
        "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]"
              },
              "subnet": {
                "id": "[variables('subnetRef')]"
              }
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2018-10-01",
      "name": "[parameters('vmName')]",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
        "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        "osProfile": {
          "computerName": "[parameters('vmName')]",
          "adminUsername": "[parameters('adminUsername')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "[parameters('OSVersion')]",
            "version": "latest"
          },
          "osDisk": {
            "createOption": "FromImage",
            "managedDisk": {
              "storageAccountType": "StandardSSD_LRS"
            }
          },
          "dataDisks": [
            {
              "diskSizeGB": 1023,
              "lun": 0,
              "createOption": "Empty"
            }
          ]
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))).primaryEndpoints.blob]"
          }
        }
      }
    }
  ],
  "outputs": {
    "hostname": {
      "type": "string",
      "value": "[reference(parameters('publicIPName')).dnsSettings.fqdn]"
    }
  }
}
An example of an ARM template- this one is for deploying/updating a Windows Server VM resource


For a trial period (currently 12 months), almost all of the really useful stuff is free (be careful not to accidentally deploy Azure Co$mos though...😳 ...that is not free, and that is not cheap). After the trial period, the cost was still relatively cheap for the services I use most in Azure (an App Service hosting a handful of .NET Core apps with SSL, one powerful virtual machine, a DNS zone, a vNet)- all for about $30/month.


Development

For development, much as with Git, the Visual Studio integration with Azure is pretty seamless and enables deployments directly from the IDE. You can also enable an Azure object explorer to view your Azure cloud instance's resources within VS.

Most established companies are going to want to keep at least some legacy software and/or infrastructure on-prem- for security reasons, or because they have to for compatibility reasons.


Azure ARC connects your On-Prem to your Cloud



And that is why there is Azure ARC- an incredibly simple way to bridge cloud and on-prem resources to create a hybrid virtual network. ARC is essentially a service that you run on your on-prem machines to connect them to your Azure subscription, where the machines can then be configured as if they were Azure resources, enabling on-prem devices to communicate with cloud resources.

"Arc works by running an agent on your non-azure resources; this is a service on VM's and a Kubernetes pod on Kubernetes cluster. Once you install this service, the machine registers with Azure and is ready for management." -samcogan.com

Additionally, virtual machines (Windows OSs or approved Linux distros) can be accessed via SSH or RDP and are as amazingly fast or as tortuously slow as you configure them to be. You can choose from preconfigured database server or application server templates or build your virtual machines completely à la carte.

The ARM template paradigm is easy to understand and develop with, and there are two CLI options: the Azure CLI and the newer PowerShell "Az" module.
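
For instance, a template like the one above can be deployed with either tool- a rough sketch, assuming the template is saved as azuredeploy.json and the resource group already exists (parameters without default values, like adminUsername and adminPassword, will be prompted for):

# Azure CLI
az deployment group create --resource-group MyResourceGroup --template-file azuredeploy.json

# PowerShell "Az" module
New-AzResourceGroupDeployment -ResourceGroupName MyResourceGroup -TemplateFile azuredeploy.json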


How to see "gains"

To see savings from using the cloud: instead of purchasing a new server or physical license, you can rent the computing power you need to run your apps and services, and you can even move your worker machines into the cloud where they can be more easily managed (we are indeed moving back to a thin-client/dumb-terminal world).

You can move from the physical Exchange mail server model to Outlook365. You can move all of your physical Office subscriptions to Office365.

If your computing needs are seasonal or time-sensitive, you can scale up when needed and pay a high price for short bursts of computing power, while scaling back down to a much lower-budget level until the next scale-up need arises. The configuration of the usage of resources in Azure is highly granular and lends itself to squeezing out a lot of efficiency for those who can monitor and manage it correctly in accordance with organizational needs.

Azure Hybrid Benefit also provides credits for customers who already have an on-prem SQL Server software license. Want to see that MSSQL2019 Enterprise for Linux instance in the cloud? 🙂


Monitor usage

Monitor your cloud resource usage: you can inadvertently requisition a resource that behaves in ways you did not expect and end up ringing up a lot of expensive RUs (resource units). Azure lets you configure a budget and alerts at thresholds toward or beyond that budget number- for example, an email warning when you have reached 105% of your monthly budget.


The current Azure offerings are plentiful and powerful enough to outfit even the most complex IT infrastructure

Azure, like any cloud provider, forces you to take a fine-grained look at every single resource you are using. It is amazing how much stuff we don't actually use.

It is only when you begin to pay for usage of each resource, and see the numbers rising daily, that you really understand how much you are utilizing your various resources.

Powerful computing machines began as timeshares because of the realization that it is madness to let expensive machines sit idle. And though the resources to share and provision among users have become far more complex, we are returning to that model.


Conclusion

Whether you use Azure to explore different kinds of technology or to implement an IT infrastructure completely in the cloud to connect and supercharge your applications and/or workforce- the tech is now there and the costs are comparable to AWS.

My two criticisms of Azure are that (1) Azure seems to excessively spotlight and push certain features forward while other features (many practical, free or cheap things you would think are "essential"- like setting up DNS zones) remain somewhat in the shadows, waiting behind help links to be discovered... and (2), sadly, other things- little things like logging analytics insights that you would assume are free- are in fact Azure resources that charge RUs. 😕

These aspects suck but are tolerable in light of all of the awesome functionality Azure provides.

Microsoft continues to improve an industry-leading cloud platform that executives, management, engineers, developers, and system admins alike can all learn to love. 💖

PowerShell Commands

The origins of PowerShell lie in the Monad project which you can learn about here: https://www.jsnover.com/Docs/MonadManifesto.pdf


PowerShell base cmdlets and associated pipeline-parsing cmdlets provide powerful Windows administration tools



These can be used on the fly to glean and share information about a problem or the state of your machine(s) and network, or they can be crafted into useful scripts that run on a schedule to report on the status of applications and services, run backup and ETL tasks, and handle myriad other (often critical) scheduled jobs that happen routinely behind the scenes to keep IT operations organized and running.
You may, for instance, want a script that runs every few hours, gathers statistics about throughput and storage, and alerts admin users if a certain threshold is exceeded. Or, in Azure, you may want to use a scripted template (like an ARM template with its associated CLI commands) to configure new Azure resources and their environments.


Useful cmdlets:

#check security level
Get-ExecutionPolicy

#relax the script execution policy (requires an elevated session)
Set-ExecutionPolicy Unrestricted

#get information on any service
Get-Service -Name PhoneSvc

#get the same log info seen in eventvwr
Get-EventLog -LogName "Application"

#get process information
Get-Process -ComputerName MYCOMPUTER

#stop process like cmd.exe kill
Stop-Process -Name "notepad"

#get drive information of the drives connected to the current PS session
Get-PSDrive 

#get information on any powershell cmdlet, function or module
Get-Help -Name Streaming

#get all the installed commands
Get-Command

#connect to your azure account with the "Az" module
Connect-AzAccount

#upload blob content to storage
Set-AzStorageBlobContent -File "D:\_TestImages\Image002.png" `
  -Container $containerName `
  -Blob "Image002.png" `
  -Context $ctx `
  -StandardBlobTier Cool

#download blob content from storage
Get-AzStorageBlobContent -Blob "Image002.png" `
  -Container $containerName `
  -Destination "D:\_TestImages\Downloads\" `
  -Context $ctx

#stop a sql server instance
Stop-SqlInstance -ServerInstance MSSQL01

#clear screen
Clear-Host

#ping
Test-NetConnection

#telnet
Test-NetConnection -ComputerName "Hostname" -Port 80

#tracert
Test-NetConnection -TraceRoute

#ipconfig
Get-NetIPAddress

#nslookup
Resolve-DnsName -Name "Hostname"

#netstat
Get-NetTCPConnection

#flushdns
Clear-DnsClientCache

#ip release/renew
Invoke-Command -ComputerName MYCOMPUTER -ScriptBlock {ipconfig /release}
Invoke-Command -ComputerName MYCOMPUTER -ScriptBlock {ipconfig /renew}

#disable/enable network card
Disable-NetAdapter -Name "Ethernet" -Confirm:$false
Enable-NetAdapter -Name "Ethernet"



Additionally, it is often useful to pipe command output, especially in CI/CD toolchain scripting where one script's output becomes the next script's argument(s) in the chain.


For example: 

#export a list to .csv file
Get-Service | Export-CSV c:\20200912_ServiceSnapshot.csv

#be more selective with Select-Object module and pipe that to the csv
Get-Service | Select-Object Name, Status | Export-CSV c:\20200912_ServiceStatusSnapshot.csv

#get event information and pipe method (of each log event) info to the console
Get-EventLog -log system | gm -MemberType Methods

#get a process and stop it
Get-Process notepad | Stop-Process

#delete all files matching some Regex pattern
Get-ChildItem $Path | Where{$_.Name -Match "someFileName.txt"} | Remove-Item






Useful Calculators

The following are links to some very useful calculators and en/decoding tools that can help you do anything from binary encoding/decoding and encryption/decryption to identifying supernet and subnet information (the network and host portions of an IP address) given an IP address and the number of masking bits applied.



  • https://www.devglan.com/online-tools/aes-encryption-decryption
  • http://www.unit-conversion.info/texttools/ascii/
  • http://www.unit-conversion.info/texttools/convert-text-to-binary/#data
  • https://www.calculator.net/binary-calculator.html
  • https://www.calculator.net/ip-subnet-calculator.html
  • https://www.calculator.net/standard-deviation-calculator.html
  • https://onlinehextools.com/xor-hex-numbers
  • https://www.calculator.net/random-number-generator.html



    Programming Language Origins and Paradigms

    The following charts (1) outline the origins of some of the most well known languages from the outset of computing up to 2001 and (2) illustrate the primary motivations and programmatic structure of several languages.


    A brief history of computing languages up to 2001


    Many languages, many different ways of creating software suited for various purposes

    Small Multiples (are awesome)

    To keep it short and sweet let's go with the definition:

    "A small multiple (sometimes called trellis chart, lattice chart, grid chart, or panel chart) is a series of similar graphs or charts using the same scale and axes, allowing them to be easily compared. It uses multiple views to show different partitions of a dataset."

    Read any serious visual communication guide and it will invariably highlight this powerful tool we have at our disposal when we have the data (we almost always have the data).

    A pair of Small Multiples examples quite pertinent to the current times, followed by some other good ones:







    This CNN.com graphic captures a running snapshot of the "new case/spread" curve trajectory of individual states



    This clearly communicates how each state unemployment picture fared from 1976-2009



    This SM visual shows population change over time by country (look at Mexico's growth since 1960)




    Locations - Google Maps API, ASP.NET Core and SQL Server

    This app's purpose is to use the Google Maps API to get geographic data and render locations on maps with editable pins (much like... many apps these days- it is becoming an expectation for any application/service involving a street address).

    In this way you can record or plan the state(s) of an event or location at some particular street address- or just have a geographic representation of some important locations that you can then print as a custom map.


    This is a proof-of-concept app illustrating what you can do with a little JavaScript, a web app and the Google Maps API



    The code below takes location records (containing the lat/long of the geographic coordinate) from a database and then initializes the Google Map with some options (I omitted many for brevity). The main interesting thing the code does is that, when it renders the pins (the addMarker() function), it adds an event listener to delegate the task of popping up an ASP.NET Core-bound edit modal when a user clicks the pin.

    On the Add and Update side, as far as mapping Lat/Long from Street, City and State- that is all handled by the incredibly useful GoogleLocationService, provided as a NuGet package for .NET Core apps.

    Other than that it is just standard JavaScript- Google Maps API does virtually all of the geocoding and map visualization heavy lifting.


    The crux of the API utilization code (callback and map rendering) is this:
     <script>  
         function initMap() {  
           var map = new google.maps.Map(  
             document.getElementById('map'),  
             {  
               center: new google.maps.LatLng(@Model.CenterLat, @Model.CenterLong),  
               zoom: 8  
             }  
           );  
           var pins = @Html.Raw(Json.Serialize(@Model.Locations));  
           for (var i = 0; i < pins.length; i++) {  
             var myLatLng = {  
               lat: pins[i].lat,  
               lng: pins[i].long  
             };  
             addMarker(myLatLng, map, pins[i]);  
           }  
         }  
         function addMarkerAsync(location, map) {  
           var marker = new google.maps.Marker({  
             position: location,  
             title: 'Home Center',  
           });  
           marker.setMap(map);  
         }  
         function addMarker(location, map, pin) {  
           var marker = new google.maps.Marker({  
             position: location,  
             title: '...something dynamic...',  
           });  
           var infowindow = new google.maps.InfoWindow({  
             content: ''  
           });  
           function AsyncDisplayString() {  
             $.ajax({  
               type: 'GET',  
               url: '/Home/GetLocationModalInfo',  
               dataType: "HTML",  
               contentType: 'application/json',  
               traditional: true,  
               data: pin,  
               success: function (result) {  
                 infowindow.setContent('<div style="background-color:#000000;">' + result + '</div>');  
                 infowindow.open(map, marker);  
               },  
               error: function (arg) {  
                 alert('Error');  
               }  
             });  
           }  
           google.maps.event.addListener(marker, 'click', function () {  
             AsyncDisplayString(map, marker)  
           });  
           marker.setMap(map);  
         }  
       </script>  
    


    And then this Controller Action that uses GoogleLocationService to get coordinates by address:
     [HttpPost]  
         public IActionResult AddLocation(LocationModel location)  
         {  
           string address = location.StreetAddress1.Replace(" ", "+") + "," + location.City.Replace(" ", "+") + "," + location.State.Replace(" ", "+");  
           MapPoint coords = _locationService.GetLatLongFromAddress(address);  
           location.Lat = (decimal)coords.Latitude;  
           location.Long = (decimal)coords.Longitude;  
           using (var db = new SqlConnection(_configuration.GetConnectionString("DefaultConnection")))  
           {  
             db.Open();  
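              // NOTE: concatenating user input into SQL like this is open to SQL injection;
              // a parameterized query would be the safer approach in anything beyond a demo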
             string sql = @"INSERT INTO [Locations].[dbo].[Locations] ([Name], [Contact], [Email], [Website], [Phone], [StreetAddress1], [StreetAddress2], [City]"  
               + ",[State], [Zip], [LocationContact], [PrimaryContact], [Notes], [Type], [Lat], [Long], [Petitions], [Flyers], [Posters], [LastPickUpDateTime], [LastOutOfStockDateTime], LastDropoffDateTime"  
               + ",[AllTimeOutofStock],[Unsupportive],[VolunteerInterest])"  
               + " VALUES ('" + location.Name + "','" + location.Contact + "','" + location.Email + "','" + location.Website + "','" + location.Phone + "','" + location.StreetAddress1 + "','" + location.StreetAddress1 + "','" + location.City + "'"  
               + ",'" + location.State + "','" + location.Zip + "', -1, -1,'" + location.Notes + "', 1, " + location.Lat + "," + location.Long + "," + location.Petitions + "," + location.Flyers + "," + location.Posters + ",'" + location.LastPickUpDateTime + "','" + location.LastOutOfStockDateTime + "','" + location.LastDropoffDateTime + "', 0, 0, 1) " + ";";  
             db.Execute(sql);  
           }  
           var model = GetDefaultMapView();  
           model.KeyString = _configuration["MapsAPIKey"].ToString();  
           return View("Map", model);  
         }  
    




    As you can see, the Google Maps API provides a lot of opportunity for your application- don't underestimate the power of location-based data. With the tools at our disposal today, the functionality of applications is limited less by available algorithms/frameworks/tools than by our imagination.


    I strongly suggest you look into the ways you can integrate geographic/mapped data with the Google Maps API; it is a very powerful API.







    ChartJS for Data Visualizations

    I came across ChartJS about 2 years ago while debugging code from another, similar data visualization technology inside an AngularJS app.


    ChartJS is a flexible JavaScript data visualization framework which allows for some pretty powerful integrations and customizations


    The concept is that you have a "<canvas>" DOM element which you transform into a ChartJS chart via some JavaScript initialization. After that, your tasks are simply finding the data you want to render and deciding exactly how you want the chart visual to appear via its options.

    You can do some really neat and dynamic stuff with ChartJS.

    I have used a lot of charting frameworks, and it does not get more flexible or simple than this:
     <html>  
     <head>  
     <script src="https://cdn.jsdelivr.net/npm/chart.js@2.8.0"></script>  
     </head>  
     <body>  
     <div>  
     <canvas id="myChart" style='background-color:darkgray; width:100%; height:100%;'></canvas>  
     </div>  
     <script>  
     var ctx = document.getElementById('myChart').getContext('2d');  
     var chart = new Chart(ctx, {  
       type: 'line',  
       data: {  
         labels: ['16_Qtr1', '16_Qtr2', '16_Qtr3', '16_Qtr4', '17_Qtr1', '17_Qtr2', '17_Qtr3', '17_Qtr4', '18_Qtr1', '18_Qtr2', '18_Qtr3', '18_Qtr4', '19_Qtr1', '19_Qtr2', '19_Qtr3', '19_Qtr4', '20_Qtr1', '20_Qtr2', '20_Qtr3', '20_Qtr4', '21_Qtr1', '21_Qtr2', '21_Qtr3', '21_Qtr4','22_Qtr1', '22_Qtr2', '22_Qtr3', '22_Qtr4', '23_Qtr1', '23_Qtr2', '23_Qtr3', '23_Qtr4'],  
         datasets: [{  
           label: 'Some random quarterly demo data..',  
           backgroundColor: 'black',  
           borderColor: 'lime',  
           data: [40.2, 72.88, 47.1, 22, 54.43, 52.18, 17.1, 52, 67.2, 54.88, 64.1, 78, 67.2, 55.88, 58.1, 57, 50.2, 52.88, 57.1, 62, 74.43, 62.18, 67.1, 72, 77.2, 74.88, 74.1, 78, 77.2, 75.88, 78.1, 77, 70.2, 72.88, 77.1, 62, 64.43, 62.18, 67.1, 72, 67.2, 54.88, 44.1, 28, 27.2, 25.88, 38.1, 37, 40.2, 42.88, 44.1, 52, 54.43, 52.18, 67.1, 82, 87.2, 84.88, 84.1, 88, 87.2, 95.88, 108.1, 127]  
         }]  
       },  
        "options": {  
           "legend": {"position": "bottom"}      
         }  
     });  
     </script>  
     </body>  
     </html>  
    


    Reference: https://www.chartjs.org/

    SSRS REST API v2

    Here is a response from the SSRS REST API in action... (you can access a lot more SSRS item properties and customize at will once you know the API)


    The SSRS API v2 has far more functionality than v1, but they essentially work the same. You must be authenticated to the SSRS report server you are targeting (localhost in this case) to make web GET/POST requests to the API.

    Once auth'd you can push and pull any useful SSRS data pretty easily to make SSRS do some pretty cool things it can't do out of the box..


    This is the SSRS API as accessed through a web browser; simply give your .NET app an HttpClient and you can make use of all these responses; it's just JSON...



    You can get a collection of SSRS catalog items as in the example above (folders, reports, KPIs) by just specifying the action name, or you can select an individual item by putting the item GUID in parentheses in the API request URL:


    You can access individual items in the API via a GUID in parentheses after the API action name.
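
    For example (the GUID below is just a placeholder):

     GET https://localhost/reports/api/v2.0/Reports  
     GET https://localhost/reports/api/v2.0/Reports(b8d31fd0-0000-0000-0000-000000000000)  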




    Common Useful SSRS API v2 Actions:
    • Reports
    • Datasets
    • Data Sources
    • Folders
    • Schedules
    • Subscriptions
    • Comments
    • KPIs
    • CatalogItems (everything)



    Example of a .NET Standard library with an HttpService abstracting the SSRS API calls:
     using System;  
     using System.Net.Http;  
     using System.Threading.Tasks;  
     using Newtonsoft.Json;  
     namespace ExtRS  
     {  
       public class SSRSHttpService  
       {  
         const string ssrsApiURI = "https://localhost/reports/api/v2.0";  
         HttpClient client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });  
             public async Task<GenericItem> GetReportAsync(Guid id)  
         {  
           client.BaseAddress = new Uri(ssrsApiURI + string.Format("/reports({0})", id));  
           var response = await client.GetAsync(client.BaseAddress);  
           var odata = response.Content.ReadAsStringAsync().Result;  
           return JsonConvert.DeserializeObject<GenericItem>(odata);  
         }  
       }  
     }  
    This is verbose to better break down the steps of what is happening on the ExtRS service end




    A very basic class designed to demonstrate using SSRS API Response to create a .NET object:
     using Newtonsoft.Json;  
     using System.Collections.Generic;  
     namespace ExtRS  
     {  
       public class GenericItem  
       {  
         [JsonProperty("@odata.context")]  
         public string ODataContext { get; set; }  
         [JsonProperty("Id")]  
         public string Id { get; set; }  
         [JsonProperty("Name")]  
         public string Name { get; set; }  
         [JsonProperty("Path")]  
         public string Path { get; set; }  
       }  
     }  
    
     The power of the SSRS API is limited primarily by your imagination- lots of customization can be made




    And finally, called from a Controller Action in an MVC app:
     using System;  
     using System.Web.Mvc;  
     using System.Threading.Tasks;  
     using ExtRS;  
     namespace Daylite.Controllers  
     {  
       public class ReportsController : Controller  
       {  
         public SSRSHttpService service = new SSRSHttpService();  
         public async Task<ViewResult> GetReportsAsync()  
         {  
           return View("Index", await service.GetReportsAsync());  
         }  
         public async Task<ViewResult> GetFoldersAsync()  
         {  
           APIGenericItemsResponse result = await service.GetFoldersAsync();  
           return View("Index", result);  
         }  
         public async Task<ViewResult> GetReportAsync(Guid id)  
         {  
           GenericItem result = await service.GetReportAsync(id);  
           return View("Index", result);  
         }  
       }  
     }  
    


    Reference: https://github.com/Microsoft/Reporting-Services/tree/master/APISamples


    Over-engineering in Software

    TL;DR - build the app, not your imagined future state of the app

    Implementing what you know you will absolutely need before all else is the best course of action for development


    This is a great brief guide on how it happens and how to prevent, or at least diminish, the phenomenon that is born out of a desire to do things exactly right- but that ignores the fact that some things never get evaluated, and time is... finite. In (virtually) every project, we only have so much time to get the bells to ring and the whistles to whistle.

    That interface to allow for a potential future Python SDK, or putting a small proprietary API on Swagger with loads of documentation and test calls just "because", might not be a road you want to go down yet...

    Creating a .NET Standard portable class library of the core functionality with a full suite of tests, because you think it may be NuGet-ized in the future... is not something you should do until the app is already engineered and built well, out the door and humming along swimmingly!

    Over-engineering is essentially development of designs and/or writing code that solves problems that do not currently exist.


    Bugzilla learned the hard way that "future-proofing" taken too literally can become over-engineering and waste lots of time on stuff that is never used or needed



    Bugzilla dealt with it, and a former developer left a very concise and telling quote from that experience:
    Some people think design means, “Write down exactly how you’re going to implement an entire project, from here until 2050.” That won’t work, brother. That design is too rigid–it doesn’t allow your requirements to change. And believe me, your requirements are going to change. 
    Planning is good. I should probably do more planning myself, when it comes to writing code. 
    But even if you don’t write out detailed plans, you’ll be fine as long as your changes are always small and easily extendable for that unknown future. 
    -Max


    TAKEAWAY:

    • "Business requirements never converge. They always diverge."
    • "At the beginning, it’s best to stick to the KISS principle over DRY."
    • Favor less abstraction for things that are currently 100% exactly known

    Long ago, one of my software engineering colleagues at a large Milwaukee-based industrial company (thanks John Ignasiak!) taught me what is known as "soft coding" in relational SQL: using data abstraction tables to defer data types until run-time and to multi-purpose data types and relations, whilst keeping all values in the same table.


    Just never enough generalizations and assumptions to make in software development... lol


    I am not a big fan of multi-purposing things. But sometimes it is the only way, and in this case- a legacy system with an old reporting db- it was the only way given the time we had available.

    Use the tools for the task they were made for.

    Use a data type (VARCHAR, for instance) for the data it was made for, not for dynamic SQL and hard-coded but dynamically inferred data types. The latter usages are almost always symptomatic of a terrible, terrible database design, or of something that your design just cannot accommodate without major restructuring of the relational hierarchy and dependencies.

    That all being said, SQL soft-coding (and dynamic SQL) is awesome and has some really powerful use cases where it is the perfect approach for the task at hand.

    Build the app by the requirements as they currently exist, not by your imagination of how the future may or may not affect the app.

    Two principles that have helped many a developer over the years are the acronyms:


    • KISS (Keep It Simple)
    • YAGNI (You Ain’t Gonna Need It)



    Reference: https://www.codesimplicity.com/post/designing-too-far-into-the-future/

    Dapper for .NET Data Access

    Jeff Atwood described the phrase coined by Ted Neward: "Object-Relational Mapping is the Vietnam of Computer Science".

    I agree with everything except Atwood's (huh? 😨- keep in mind this was 2006) conclusion that we should do one or the other: objects or relational data records. Develop apps as a series of SQL data access statements assigning values to arbitrary pieces of monolithic application code... or go exclusively object-oriented with everything saved to blob storage... or something awkward like that.

    That, he says (in the 2006 article*), removes the O (object) - R (relational data) mapping problem entirely. It sure does; but how can we develop apps like that?

    (fast-forward 6yrs, and.... Dapper to the rescue!)

    Dapper is an awesome (IMO) alternative that allows developers to retain SOLID reuse and extensibility in their .NET data access code while still accessing complex relational data- and fast.


    Dapper has the best of both worlds in terms of what you look for in a data access framework - speed and clean, easy SQL-to-typed object mapping facilitation


    I highly recommend a brief peruse; it is a very interesting article. It essentially describes the pitfalls of ADO.NET, Hibernate, Entity SQL (EF for MSSQL) and so many of the other approaches to modeling relational data as .NET objects that have, if not failed completely, been severely lacking- especially in terms of speed and control over the actual SQL that you instruct the SQL engine to execute.

    Dapper aims to bridge the eternal gap between application and relational database code in a pretty elegant way for .NET development. So long as your database records (whether from a complex JOIN'd SP or anywhere else in your db) return data whose types and field/alias names match your "query-return-target-type" class' property names and data types, you are set for all the kinds of data access you like, without the headaches normally associated with ORMs (magic config strings, mappings in separate files that drift out of sync with class or db changes, etc.).
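
    For illustration, such a "query-return-target-type" class might look like the hypothetical sketch below- the property names and types simply have to line up with the columns the query returns (these column names are invented for the example):

     using System;  
     public class UserReport  
     {  
       public int Id { get; set; }  
       public string Name { get; set; }  
       public DateTime CreatedOn { get; set; }  
     }  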

    "there is no good solution to the object/relational mapping problem. There are solutions, sure, but they all involve serious, painful tradeoffs. And the worst part is that you can't usually see the consequences of these tradeoffs until much later in the development cycle." -Jeff Atwood on ORMs

    I guess you could say that the SQL in the queries you tell Dapper to issue to MSSQL is a set of "magic strings" insofar as VS doesn't compile it... But if you don't use SSMS to parse and execute tests of your queries before using them in application code, then you aren't really doing real data development- you are just shooting in the dark.

    You should have unit tests for this very purpose. Unit tests of your Dapper calls will catch any db changes ("hey, why did nobody tell me about this schema change in the Archives table?"); regardless, if your SQL field names don't match the class property names of the object you are trying to "Dapperize", you will find out at run-time- the exception messages get straight to the point about exactly what is off.

    Dapper works the same in all versions of .NET; it currently targets .NET Standard for that very reason, but you will need to bring in additional dependencies depending on what type of data source you are trying to access (SQL Server, MySQL, Oracle, DB2, Teradata, etc.).

    Consider giving Dapper a try - it is very useful and illuminating, and it really shines in the very areas where EF falls short.



    Dapper accessing 'UserReport' records from SQL db and returning the dynamic, typed object:
         // connection string pulled from config; a new connection is created per call so that
         // the using block does not dispose a shared instance
         readonly string connectionString = WebConfigurationManager.AppSettings["DefaultSQLConnection"];  
         public List<UserReport> ReadAllSavedUserReports()  
         {  
           using (var db = new SqlConnection(connectionString))  
           {  
             return db.Query<UserReport>("SELECT * FROM CLARO.dbo.UserReport").ToList();  
           }  
         }  
         public UserReport FindSavedUserReport(int id)  
         {  
           using (var db = new SqlConnection(connectionString))  
           {  
             return db.Query<UserReport>("SELECT * FROM CLARO.dbo.UserReport WHERE Id = @Id", new { id }).SingleOrDefault();  
           }  
         }  
    
     Forgive the "SELECT *"... this is just a demonstration..



    These methods can then easily be called in controller or other code like so:
         public ViewResult Index()  
         {  
           string nowTime = DateTime.Now.ToShortDateString();  
           ReportDAL dal = new ReportDAL();  
           Demo model = BuildModel(BuildSQLStatement(nowTime, ReportDrafts.BaseballDemo), nowTime);  
           model.Reports = dal.ReadAllSavedUserReports();  
           return View(model);  
         }  


    Dapper is not a company trying to sell anything- it is just a really useful micro-ORM for those who prefer to work more hands-on with the SQL in data access code (and like to be able to more granularly control optimization for speedier queries).

    *Atwood helped contribute (with Stack Overflow) to the development of Dapper, so... I think he and that team kinda nailed the removal and easing of the very same limitations he bemoaned in the article I reference at the beginning: https://stackoverflow.blog/2012/02/18/stack-exchange-open-source-projects/


    References: 

    https://elanderson.net/2019/02/asp-net-core-with-dapper/

    https://dapper-tutorial.net/



    NPV, IRR and Project Viability Evaluation

    Net Present Value (NPV) and Internal Rate of Return (IRR) are quite similar financial expressions.

    In fact, the two share the same formula (the same variables being measured) but use it to describe the value of a project from two different perspectives: (1) what is this project's expected future cash flow worth in today's dollars? vs. (2) how profitable (%-wise) will the return on the project investment be, based on (1)?

    NPV = Net present value is today’s value of the expected future cash flows.


    If NPV is positive, the project is estimated to be profitable



    IRR = The expected rate of return from the project.

    If the IRR of a project is higher than the WACC, the project is estimated to be profitable
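
    In formula terms (standard definitions- CF_t is the cash inflow in year t, r the discount rate/WACC, C_0 the initial investment, n the number of years):

    NPV = \sum_{t=1}^{n} CF_t / (1 + r)^t - C_0

    IRR is the particular rate r at which NPV = 0, i.e. the rate at which the discounted inflows exactly pay back the initial investment.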


    The simple spreadsheet area below explains both concepts nicely. This project would generate a $3.7k profit (NPV) over 5 years and have a significantly profitable 15.64% IRR, higher than the 8% WACC on the $20k invested.



     The project's estimated cash inflows over 5 years would add value, on paper at least


    References:

    https://www.investopedia.com/ask/answers/032615/what-formula-calculating-net-present-value-npv.asp 

    https://www.youtube.com/watch?v=Fw5-wccViOM

    https://www.youtube.com/watch?v=cSAfp6D28RM