Code Readability vs. Code Golf

All developers aim for expressiveness in their code. That is, we strive to keep our code readable and free of needless repetition, so that the behaviors of interacting objects or functions are clear and easy to understand, even for a developer who has never seen the codebase before.

But sometimes, "expressiveness" can come at a cost. Readable code means different things to different development teams, but a clear sign of unreadable code (no matter how much IDE screen real estate is saved) is an oddly nested structure of if statements like the following.
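A hypothetical C# sketch of that kind of nesting (the types and property names here are invented, purely for illustration):

// Hypothetical example only- deeply nested conditionals quickly become hard to follow
public string GetShippingZip(Order order)
{
    if (order != null)
    {
        if (order.Customer != null)
        {
            if (order.Customer.Address != null)
            {
                if (!string.IsNullOrEmpty(order.Customer.Address.Zip))
                {
                    return order.Customer.Address.Zip;
                }
                else { return "NO ZIP"; }
            }
            else { return "NO ADDRESS"; }
        }
        else { return "NO CUSTOMER"; }
    }
    else { return "NO ORDER"; }
}

Guard clauses (early returns) would flatten this into a few readable lines.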



This code is pretty objectively horrible no matter how you slice it.



If no other developers can understand your code, it won't matter how elegant it is; it will be difficult to find people who can quickly read it, understand it, and change it. Without that ability to transfer codebase knowledge, a great piece of software can become a millstone for an organization once all of the original developers have left the building.

So in comes my (*biased) opinion of code golf and how it impedes code readability. It's neat and a fun way to enhance your programming skills and knowledge of a language or framework; but it serves as a terrible template for producing readable code that your colleagues can fairly easily look at and understand, or at least debug and determine how the abstractions interact and what pieces of functionality are contained in (or spread across) which files.

Code Golf: https://code-golf.io and https://codegolf.stackexchange.com (have fun, you can learn lots)


ASM, clear for some developers; not most:
          global    _start

          section   .text
_start:   mov       rax, 1                  ; system call for write
          mov       rdi, 1                  ; file handle 1 is stdout
          mov       rsi, message            ; address of string to output
          mov       rdx, 13                 ; number of bytes
          syscall                           ; invoke operating system to do the write
          mov       rax, 60                 ; system call for exit
          xor       rdi, rdi                ; exit code 0
          syscall                           ; invoke operating system to exit

          section   .data
message:  db        "Hello, World", 10      ; note the newline at the end

-vs-

Java, unclear 1-liner:
public class StringStatsArray { private final String[] stats; public StringStatsArray(String[] a) { stats =
  a; } public String toString() { String ret = "{"; for (String check: stats) { ret += "\"" + check + "\", "
  ; } ret = ret.substring(0, ret.length() - 2) + "}"; return ret; } public double averageLength() { double
  sum = 0; for (String check: stats) { sum += check.length(); } return sum / (double) stats.length; } public
  int search(String target) { for (int i = 0; i < stats.length; i++) { if (stats[i].equals(target)) { return
  i; } } return -1; } public int sortStatus() { if (stats.length <= 1) { return 1; } if ((int) stats[0]
  .charAt(0) < (int) stats[1].charAt(0)) { for (int i = 0; i < stats.length - 1; i++) { if ((int) stats[i]
  .charAt(0) > (int) stats[i + 1].charAt(0)) { return 0; } } return 1; } else if ((int) stats[0].charAt(0) >
  (int) stats[1].charAt(0)) { for (int i = 0; i < stats.length - 1; i++) { if ((int) stats[i].charAt(0) <
  (int) stats[i + 1].charAt(0)) { return 0; } } return -1; } else { for (int i = 0; i < stats.length - 1; i
  ++) { if ((int) stats[i].charAt(0) < (int) stats[i + 1].charAt(0)) { for (int j = i; j < stats.length - 1;
  j++) { if ((int) stats[j].charAt(0) > (int) stats[j + 1].charAt(0)) { return 0; } } return 1; } if ((int)
  stats[i].charAt(0) > (int) stats[i + 1].charAt(0)) { for (int j = i; j < stats.length - 1; j++) { if ((int
  ) stats[j].charAt(0) < (int) stats[j + 1].charAt(0)) { return 0; } } return -1; } } return 1; } } }

-vs-

C# Minecraft client, clear code:
using (var world = JavaWorld.Connect())
{
    world.PostToChat("Hello from C# and .NET Core!");
    var originBlock = world.GetBlockType(0, 0, 0);
    world.PostToChat($"Origin block is {originBlock}.");
}



"Code formatting is about communication, and communication is the professional developer’s first order of business." -Robert C. Martin (Uncle Bob)


Even complex languages can be simplified somewhat by using logical naming conventions to identify what the entities are and what operations are doing.

Clear and consistent naming conventions for all members within an application's code go a long way toward making your code open to collaboration and/or extension by others. In much the same way people prefer different kinds of poetry, we developers prefer certain programming languages or styles (.NET vs. Java vs. NodeJS vs. Python vs. C and C++; Message Queues vs. RDBMS; OO vs. FP) more or less than others. We should, however, as far as possible, make our code clearly formatted and named so that anyone can easily interpret it. We should be as clear and consistent as possible and only comment when explaining the behavior in variable or function names alone won't suffice.
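As a small, hypothetical illustration (made-up names, nothing from a real codebase), compare:

// Requires a comment (or tribal knowledge) to interpret
int d;  // elapsed days since the last customer login

// Says what it is; no comment needed
int daysSinceLastCustomerLogin;

// The same applies to methods: GetList() vs. GetActiveCustomersByRegion(region)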

And this is all not to be confused with just good old-fashioned best practices encompassed in the excellent book Clean Code by Robert Martin. Write clean code. Write understandable code with well-named members etc. And for the love of software development (esp. open-source) having a fighting chance to reach its potential in this early part of the 21st century, please do not write code in such a way that serves only to prove how clever you are or obfuscate what is really going on for reasons unknown. "Security through obscurity" is a real thing, but it's not a good way to manage a team codebase.



Assume that the developers who inherit your code are going to need a little guidance



*True story: I was once interviewed, first on the phone and then on-site, for an interesting .NET/SQL Server dev opening with a major league baseball team. I did relatively well in the 3+ hour interview, which included live coding exercises and meeting all of my potential future colleagues. I went home that day pretty confident that I'd get the job- which, in my experience, is usually a sure sign that I won't get the job.

The last step before the hiring decision arrived in the form of an email letting me know that the other final candidate had submitted a better solution than mine to the final and most difficult coding challenge- but only by a couple of bytes (16 or 24 bits or something)- and that I would be given a chance to resubmit my code if I could make it smaller.

The final challenge was to demonstrate some extreme DRY by writing a program that would output the lyrics to "Old MacDonald Had a Farm" in the smallest possible source code file. So, logically, I abstracted away all the individual characters and repeating character sequences and did my best to make the program as small as possible.

I declined to attempt any improvements. I honestly didn't know how to make it smaller, but more to the heart of the matter, this code golf barrier to entry made me wonder if there were other development practices within the IT department of this organization that I would not be a fit for; so it was probably mutually beneficial that I dropped the ball.

Due to a few bytes of file size... I came in 2nd place and lost out on my dream contract/job. All of this because of my failure to hit a hole-in-one on that darn code golfing test.

My (C#) solution looked somewhat similar to this:

void x(string c, string d){var a="Old MacDonald had a farm";var b=", E-I-E-I-O";var f=" a ";var g=" there";Debug.WriteLine(a+b+",");Debug.WriteLine("And on that farm he had"+f+c+b+",");Debug.WriteLine("With"+f+d+" "+d+" here and"+f+d+" "+d+g+",");Debug.WriteLine("Here"+f+d+","+g+f+d+", everywhere"+f+d+" "+d+",");Debug.WriteLine(a+b+"!");}


I think the actual (C?) solution with the smallest filesize was something like this: 

o(char* x, char* y)
{
    char* f=
        "Old MacDonald had a farm, E-I-E-I-O,\n"
        "And on that farm he had a %s%.13s"
        "With a %s %s here and a %s %s there,\n"
        "Here a %s, there a %s, everywhere a %s %s,\n"
        "%.35s!\n";

    printf(f,x,f+24,y,y,y,y,y,y,y,y,f);
}

Ah well, we live and learn. Development is not a series of one-upping and out-dueling one another in battles of cleverness. It's an ongoing communication, elaboration and collaboration experiment wherein the ultimate goal is for everyone to understand the domain problems and solutions, the domain modeling for the problem abstractions, and a standard domain language which removes ambiguity and enables "1000 ft" knowledge for even the most tertiary of interested stakeholders.

Some areas of code (for instance the data layers, the UI, the file I/O, etc.) will invariably be understood by some team members more or less than others. And a team architect will understand everything, though not at the low, "weeds" level of the dev who is implementing a particular story or piece of functionality.

Code like you might die tomorrow and someone will have to pick up everything from scratch. Enable frictionless configuration for setting up the local IDE and environment for the first time. Create a useful, concise README.md that includes everything needed to get up and debugging, plus an explanation of anything that is completely or somewhat non-standard. Include unit and e2e integration tests that double-check runtime interactions and clearly demonstrate all behaviors and interactions.

Above all else: make your application's instructions so readable and elegant that perusing the source code is not unlike reading a book- immediately understanding and visualizing each entity and operation and what typical usage looks like. It's really hard, but it is important for the future that we all get it right and continuously improve our code's understandability.

Happy developing, all.







"All the quirks and buggy behaviours we can exploit in a code-golf challenge allow us to identify with less effort code that will otherwise behave unexpectedly or cause trouble."
...
"I’d argue that code-golf can be greatly beneficial as long as we keep our mind in the game and not try using it where it doesn’t belong." -Juan Cortés

Very, very true points in favor of code golfing for fun and deep exploration (even if not the default for real world implementations).


A Timeless Book (by Jerry Fitzpatrick)

TL;DR:

This book gets right to the point of the common aspects required for building great software and is too valuable for any software development professional not to read; and it contains just 172 pages! 😃


Read this book.


Plan Before Implementing

Proper and properly named abstractions to match the problem domain and purpose. Thoroughly documented and well-understood behavioral interactions among all accessible components, and at minimum a solid high-level understanding of all dependencies; know when not to reinvent the wheel and instead utilize a proven 3rd-party or open-source tool.


Keep it Small

YAGNI ("You ain't gonna need it") is a very real thing. Development teams should focus on working prototypes that can be ironed out for production vs. forever "ideal" implementations that never make it to an actual user's screen.


Write Clearly

Treat a rough draft as a rough draft by encouraging code reviews and frequent minor refactoring to achieve code clarity for the next developers who will inevitably read (and need to understand in order to change or extend) your logic in the future.


Prevent Bugs

Meticulously control scope and member access and understand all ramifications of behavioral code points within the application. Every area of variation/mutation/state change should be covered by tests that are both named and implemented to clearly demonstrate the function/class/behavior being tested.

All tests are important for confidence/assurance in ongoing development but TDD (writing tests before the implementation a la Kent Beck) is not a replacement for good, cohesive design. That being said, I think the TDD backwards approach can help start great design discussion among developers when the design is somewhat unclear (tests, particularly broken tests will expose the innards that most warrant discussing).


Make the Program Robust

Utilize agreed-upon code formatting and pattern implementation standards and non-antagonistic code reviews to ensure these standards are being followed.

Utilize information hiding, useful categorization, and encapsulation that can be extended without lending itself to potential client issues.

Do not try/catch and log everything unless necessary for compliance or a particular initiative being monitored. Instead, use try/finally and let exceptions bubble up to the surface application code. Make exceptions very visible (but exit the program gracefully if the exception is not "handleable") and integrate tests to prevent the same (preventable) issues from recurring.

Exceptions that are quietly handled and merely logged hide problems in the code that need immediate attention.
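A minimal C# sketch of that idea (the method names are hypothetical): clean up with try/finally, let the exception bubble up, and surface it once, visibly, at the application boundary:

using System;
using System.IO;

public static class Importer
{
    public static void ImportFile(string path)
    {
        var reader = new StreamReader(path);
        try
        {
            ProcessLines(reader);   // any exception thrown here bubbles up to the caller
        }
        finally
        {
            reader.Dispose();       // cleanup always runs, but the exception is not swallowed
        }
    }

    private static void ProcessLines(TextReader reader) { /* domain logic */ }
}

// At the surface (e.g., Main): handle the failure once, visibly, and exit gracefully
// try { Importer.ImportFile(args[0]); } catch (Exception ex) { Console.Error.WriteLine(ex); Environment.Exit(1); }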

Use solid CI/CD and automated testing that tests for easily definable application behaviors.


Prevent Excess Coupling

Favor atomic initialization: Initialize everything all at once versus incremental composition.

Discourage unnecessary extensibility points and instead expose what needs to be exposed with as little information sharing as possible.

Strive for immutable members wherever possible.

Cohesive, self-evident abstractions are of utmost importance.
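A small C# sketch of those last few points- atomic initialization, minimal exposure, and immutability (the types here are hypothetical, for illustration only):

using System.Collections.Generic;
using System.Linq;

public sealed record LineItem(string Description, decimal Amount);

public sealed class Invoice
{
    private readonly IReadOnlyList<LineItem> _lines;   // internal detail, never exposed for mutation

    public string Number { get; }                      // immutable after construction
    public decimal Total { get; }

    // Everything is initialized at once in the constructor- no partially built objects
    public Invoice(string number, IEnumerable<LineItem> lines)
    {
        Number = number;
        _lines = lines.ToList();
        Total = _lines.Sum(l => l.Amount);
    }
}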


SUMMARY

The concepts so succinctly covered in Timeless Laws of Software Development apply to all types of code (imperative or declarative) and all languages.

While I've read many great software books and all have helped me become a better developer, no text has struck me as so plainly obvious and concise at explaining good software design concepts in such an easy-to-grok manner.

For a worthwhile read, check out Timeless Laws of Software Development by Jerry Fitzpatrick.

.NET 5+: the Future of .NET

Microsoft doesn't want to restrict the growth of their highly popular .NET development framework by requiring that it only run on Windows OS and that its components rely on Windows-based components like ASP.NET which relies on IIS.

Enter .NET Core 1, 1.1, 2, 5 (6)+: the Future of .NET.


One .NET Framework to rule them all? No. But a new, unified, cross-platform .NET Runtime? Yes.


With .NET 5 (and, coming in hot down the pike, .NET 6), developers can use the languages (C#, F#, VB and C++/CLI) and framework (.NET) they are familiar with to build applications that can run on Windows, Linux or macOS. While .NET Core 1 and 2 lacked several expected features and felt fairly unsupportable, .NET 5 feels more familiar and addresses most of those issues (the networking improvements alone are one example).

Formerly known as .NET Core 1, .NET Core 1.1 and then .NET Core 2, the future for the .NET Framework and associated runtime libraries will fall under the umbrella of what is simply called ".NET" (5 (current release), 6 (preview), with intermediate minor versions one would expect) as of the time of this writing. ASP.NET Core will continue to be labeled with the "Core" designation for now with the .NET version tagging the runtime (ASP.NET Core Runtime 5, ASP.NET Core Runtime 6, etc.).

There do exist more-than-slightly-subtle differences. For instance, ASP.NET Core 5 projects are structured differently: Global.asax.cs has been replaced by Program.cs and the accompanying Startup.cs, which contain the Main entry point and provide places for application configuration methods that execute only when your program is initially run (this includes functions that respond to certain events, like unhandled/last-chance exception handling, etc.).

    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
            services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true).AddEntityFrameworkStores<ApplicationDbContext>();
            services.AddControllersWithViews();
            services.AddRazorPages();
            services.AddMvc().AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix);
            services.AddLocalization(options => options.ResourcesPath = "Resources");
            services.Configure<RequestLocalizationOptions>(options =>
            {
                options.SetDefaultCulture("en-US");
                options.AddSupportedCultures("en-US", "es-CO", "fr-FR");........

An example of .NET 5 ASP.NET Startup.cs which replaces most Global.asax.cs functionality; Program.cs handles all else

 

In sum total, .NET 5 is basically a framework with the same functionality that the .NET Framework had, with new namespaces and methods and a few slightly different design paradigms for ASP.NET and other project templates that use .NET 5 code. But it compiles to binary code that can run cross-platform- on a Linux distro, macOS, or regular old Windows 10, 11, 12, etc.

.NET 5 seems to favor better encapsulation/information hiding (less extensibility, but fewer things to keep track of that can go wrong!), but ASP.NET Core still tends to have the magic string architecture that is its double-edged sword. Everything is a balance; nothing can be 100% fast, cheap and customizable.

Expect lots more to come now that .NET has its versioning, cross-platform integration and road map in order.



Virtually everything is customizable if you want to use your own implementation of a SqlClient RDBMS provider or authentication provider. Setting up social media authentication is relatively simple. For relational database access, Dapper is a nice balance between ADO.NET with pure SQL statements and an ORM for object population/deserialization, and it targets .NET Standard, which can be referenced and compiled by a .NET 5 project.
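For example, a minimal Dapper query looks like plain SQL plus automatic object mapping (the Customer table, columns, and connection string here are hypothetical):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerRepository
{
    // Dapper adds Query<T>() as an extension on IDbConnection; the SQL stays visible
    public static List<Customer> GetActiveCustomers(string connectionString)
    {
        using var conn = new SqlConnection(connectionString);
        return conn.Query<Customer>(
            "SELECT Id, Name FROM Customer WHERE IsActive = @active",
            new { active = true }).ToList();
    }
}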

With SQL Server moving to the Linux world, I imagine many other useful services that used to rely solely on the Windows OS will follow: native MS Office, MS Dynamics and related dependencies, IIS, etc.

Where .NET really shines is in its ability to compile/integrate .NET code libraries (with the notable exception of server-side WCF)  built with .NET Framework 4.5-4.8 as well as Xamarin, Mono, Unity and other previous .NET-related runtime libraries. It uses what is called .NET Standard (a collection of commonly used and .NET 5-compatible .NET Framework  members) to achieve this bridging between .NET and all .NET Framework versions preceding it.

Perhaps not all of that old .NET 4.5-4.8 legacy code will need to go to waste.


All SDKs under one roof; what is not to like? Almost as great as "Write Once.. Run Anywhere" 


Long-term support for .NET 5's predecessor, the .NET Framework, will continue for versions 4.6.2 and above for the foreseeable future, while prior versions have either already ended LTS or will be ending it soon, per Microsoft:

Support for .NET Framework 4, 4.5, and 4.5.1 ended on January 12, 2016.

Support for .NET Framework 4.5.2, 4.6, and 4.6.1 will end on April 26, 2022. Customers and developers must have completed the in-place update to .NET Framework 4.6.2 or later by April 26, 2022 to continue receiving technical support and security updates.

Marginal Cost Analysis

Illustration of MC:MR curve (Supply curve); average total (ATC) and variable costs (AVC) can be analyzed as well


Marginal Cost (MC) is "the cost added by producing one additional unit of a product or service." The analysis of MC is used heavily in all departments that have a need to maximize output (sales) and minimize input (dollars spent).

So-called "what if scenarios" exist everywhere in modern business management and administration. They are largely based on the relationships of MC variables to demand. "What if we lowered the price of our product from $8 to $5 in this market, what will happen?"

Depending on the demand in that market, the maximum possible profit point could fall (if the increase in demand does not make up for the $3 per unit no longer being recovered in sales) or rise (if the lowered price leads to sufficiently higher demand than the demand at the $8 price).

Whether the maximum possible profit point rises or falls depends on how much demand is associated with the product when it is sold at $5. Assuming the business can produce more of the item per day at $5, unless demand increases production quantity so much that the maximum possible profit is higher at $5 than it is at the $8 price, the decision to lower the price from $8 to $5 is not a profitable one.

Often, companies find themselves with the luxury of being able to lower prices to generate high demand because their production costs are so low when producing at such high scale (what is known as possessing "economies of scale") or because their advertising costs are so low or because they can simply afford to take on losses to break into the market with hopes that they can someday achieve the economies of scale of like those of the market winners once they have established themselves as a legitimate market competitor.


Marginal cost with respect to quantity: MC = ΔTC / ΔQ (the change in total cost divided by the change in quantity)


Break-Even is defined as "the amount of money, or change in value, for which an asset must be sold to cover the costs of acquiring and owning it. It can also refer to the amount of money for which a product or service must be sold to cover the costs of manufacturing or providing it."

Many businesses do not make much money. However, generating enough income to pay all of your employees and utilities and taxes in addition to all the business expenses that generate economic activity- is not an easy feat and is in no way a failure (a business failure is a business that only ever consistently loses money- never breaking even or generating profit).

A simple illustration of the break-even concept: say an individual bought 90 cans of Pepsi in advance of a big event in their town. If the purchase price for the bulk 90 cans (3 30-packs) was $45 and they sold the individual cans to event-goers for $5/ea, they would only have to sell 9 Pepsis to break even- the remainder (minus applicable permits or taxes) is profit: $45 / (x × $5) = 1, where x = 9.
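The same arithmetic, as a quick sketch:

// Break-even quantity = total cost of the cans / price per can
decimal totalCost       = 45m;                            // 3 30-packs purchased up front
decimal pricePerCan     = 5m;
decimal breakEvenCans   = totalCost / pricePerCan;        // 9 cans
decimal profitIfAllSold = (90 * pricePerCan) - totalCost; // $405, before any permits or taxes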


Traditional Make vs. Buy:
Companies are frequently faced with the decision to create something within the company or opt to purchase from a third party vendor. The latter usually makes sense; the former only makes sense if the company can consistently produce a material or service input at a lower price than could be paid to a third party.

Additionally, any in-house materials development should not interfere with on-going production operations.


In Production Planning:
With machines sitting largely idle unless there is sufficient work for them to process, a manufacturing manager has to contend with a host of variables in configuring the production floor and deciding which machines will be used to maximize production of the amount they need to deliver each interval (day, week, month, year).


In CPQ and Pricing Analysis:
The price point at which MC and MR intersect is known as the Price Equilibrium (PE). The quantity produced and sold at the PE ("Maximum Possible profit" in the chart above) maximizes profit, since producing beyond that point adds more cost per unit (MC) than it brings back in revenue (MR).


Other notes:
Federal, state and local governments can institute price limits (price ceilings) which restrict the ability of a producer to recover all of the profit area they might gain for product sold at prices in excess of the price equilibrium. Selling at the price equilibrium is selling at the "break-even" point.

The ability to isolate P, MC, MR, Q and analyze their response to changes in the other variables is an invaluable tool for organizational and financial planning.





Custom SSRS Authentication Extended

One of the limitations of SSRS is that it cannot be used outside of a Windows environment (security is all dependent on Active Directory and Windows user accounts). Unfortunately, app impersonation does not work cleanly enough, as it will prompt users for credentials when they first authenticate to the Report Server. That doesn't cut the mustard for user expectations of public-facing apps.



Authentication only works as far as its interoperability can reach (it needs to reach beyond Windows Auth)



So, to get around this, Microsoft has provided a workaround in the form of the CustomAuthentication example. It provides a basic way to authenticate using forms-based authentication with a login page. This does work, "technically". But this also will not work if we want our report authentication to be invisible/seamless to a user who is already authenticated to the main app (that provides SSRS-based reporting features).

Why make a user authenticate twice?

So (and this isn't incredibly clever, but it is useful): enter this extension of the CustomAuthentication example, which allows local-only access by default but suggests several overrides for authentication.

Here is the crux of how the CustomAuth can work in many different ways simply via modifying Login.aspx.cs of the Microsoft example:


        private void Page_Load(object sender, EventArgs e)
        {
            //Your secret authentication sauce goes here..
            //appHash should get dynamically generated from the app calling SSRS (ideally for each request if performant enough)
            //ie. 
            //if (CheckAuth(System.Web.HttpContext.Current.Request.Cookies["origAppHash"].ToString()))
            //if (CheckAuth(System.Web.HttpContext.Current.Session["otroAppHash"].ToString()))
            if (System.Web.HttpContext.Current.Request.IsLocal)
                FormsAuthentication.RedirectFromLoginPage("daylite", true);
        }

        private bool CheckAuth(string appHash)
        {
            //DecodeAndCryptoChecks on appHash
            return true;
        }

 MS' Example uses Page_Load(); presumably Page_PreLoad() or Page_Init() would also work here- 'just an HttpRequest eval



The idea behind this branch (closed as soon as it was PR'd, as I don't expect MS to integrate it, but I did want to not-so-subtly nudge them to explain SSRS' custom extensibility better) is not a defined solution- it is to demonstrate how you can interface with the authentication and authorization operations of the SSRS service to achieve virtually any kind of custom security behavior or compatibility that you require.




GitHub (my sonrai LLC is my contractor/consultant LLC): https://github.com/sonrai-LLC/ExtRSAuth

EDI, RPC, SOAP, MQ, REST and Interoperability

All of these concepts help to address the same concern: how do we move data from System A to System B when these systems have no direct linkage (no common data store)? The following are a few of the technologies that have served as answers to this question.


There was a different kind of web back in the day


EDI (Electronic Data Interchange): An exchange of data that is usually large in volume compared to other remote data transfer methods (batches of thousands of records vs. one record of JSON or one row of an RDBMS table), and usually done in conjunction with some kind of ETL and/or data warehousing process. EDI is typically used for large, domain-specific transactions; the data transfer itself is performed over SFTP or another secure file transfer protocol and utilizes XSLT for data formatting. EDI files must adhere to strict ISO formatting specifications. This is helpful (and coincidentally adds a layer of complexity for hackers) when trying to ensure that a large number of disparate reporting parties are all sending data in the right format: if an EDI file's data format is wrong in any way, it won't be accepted at the destination.


This is an example of an EDI "EDIFACT" formatted file


RPC (Remote Procedure Call): A highly-coupled abstraction (if you can call it abstraction- it's really more of a video game accessory that only works on certain consoles) that essentially requires the client and server to be running the same program- which, while once upon a time feasible (and in some cases desirable for channel security), is not typically the ideal way to communicate openly. However, for closed, secure communications, RPC is still very much a part of the many technologies that facilitate secure messaging in applications like Telegram, Signal and the like.


As stated, RPC implies client-server sharing code (see "RPC thread" spanning above)


SOAP (Simple Object Access Protocol): This had been the standard for web services (indeed it is why Microsoft created WCF) until HTTP-based/RESTful APIs replaced it as the standard choice among developers of newer projects around the early 2010s. It is self-describing (.wsdl) and allows for communication over virtually any point-to-point communications protocol. SOAP is, however, quite prescriptive in the way it dictates how SOAP message "objects" are defined, leading to a lot of (interface) metadata inside the envelope that may have little to do with the task at hand but which is needed so that the client can understand the message and deserialize the object if necessary.


An example of a faulting SOAP call's SOAP response


MQ (Message Queuing): The primary concept of utilizing message queues and exchanges is the asynchronous nature in which messages are pushed and pulled, vs. REST or SOAP service calls, which are request/response synchronous by design.

This architectural data model also supports highly-decoupled design whereby many applications, all written in different languages and under disparate frameworks can utilize the same MQ Exchange and share communication across queues.

Frameworks like RabbitMQ facilitate event sourcing design with queues; an app is often both a Producer and Consumer
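As a rough sketch of that fire-and-forget nature (using the RabbitMQ .NET client, 6.x style; the queue name and message below are made up):

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declare the queue (idempotent) and publish; no response is awaited-
// a consumer (possibly written in another language) picks the message up whenever it is ready
channel.QueueDeclare(queue: "order-events", durable: true, exclusive: false, autoDelete: false, arguments: null);
var body = Encoding.UTF8.GetBytes("{ \"orderId\": 42, \"status\": \"Shipped\" }");
channel.BasicPublish(exchange: "", routingKey: "order-events", basicProperties: null, body: body);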



REST (REpresentational State Transfer) APIs: Operating completely (and solely) over HTTP(S), primarily via GET/POST actions that have already undergone roughly three decades of incremental improvement, REST APIs will be at the web's foundation for as long as the web lives on. They aren't self-describing, though descriptive metadata can be embedded in the naming of the API resources to achieve similar reflection, and there are usually descriptive, interactive specifications for large publicly hosted APIs like the ones from Google Maps and Twitter. RESTful APIs are not highly prescriptive about API structure or operations: a call just has to be an HTTP action method that any HTTP client can understand. Most APIs default to passing JSON around when objects are involved in POST arguments or GET return values, but there is no reason you cannot return XML. Or a file. Or a streaming video. Or whatever floats your software ship. People create RESTful API wrappers for SOAP services all the time.
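Consuming a REST endpoint requires nothing more than an HTTP client; a minimal C# sketch (the URL below is hypothetical):

using System.Net.Http;
using System.Threading.Tasks;

public static class OrderApiClient
{
    private static readonly HttpClient Client = new HttpClient();

    // A plain GET; the payload happens to be JSON but could just as easily be XML, a file, or a stream
    public static async Task<string> GetOrderJsonAsync(int orderId)
    {
        var response = await Client.GetAsync($"https://api.example.com/orders/{orderId}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}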



Just leaving the REST for last.. 😉


Although many in the software development community prefer the use of RESTful APIs, Message Queuing, or some combination of the two for new projects, we must be mindful of the fairly recent past, which has littered the landscape with SOAP, EDI, ETL and an assortment of proprietary and highly customized RPC (still active) communication channels (for example, SOAP streaming over UDP).

There was a time, before the web as we know it today, when machines like ATMs and TicketMaster terminals were still interconnected just the same. However, those connections were not regular HTTP packets traveling over TCP to and from port 80 or 443, but rather fixed-length frames of Asynchronous Transfer Mode (ATM) or another early transfer protocol. And many of those ATM and TicketMaster connections still exist, even if upgraded for modern times via something like WCF (.NET) or JAX-WS (Java).

There are certain things only SOAP can do. There are certain legacy systems which will not be updated any time soon (because "if it ain't broke") that still need to interface with SOAP clients. As technologists we have to deal with this and understand the tradeoffs of using different frameworks for different jobs. 

In the same way that there is no perfect language for every scenario, no one way of electronically transferring data and interacting with remote systems is always the "best way" (although REST APIs come pretty close, as so much of our connected world is now HTTP-based).

The best choice for sending remote communications, just like any choice of framework, language, or design paradigm, is never fixed. The answer requires careful, domain-centric, thorough analysis of the problem and the resources available to resolve that problem. In software development, the answer to "which way is the best way?" is invariably "it depends".





"Next Big" Software Religiosity and The Go-nowhere Rush

There is far too much religious extremism in information technology these days. And there have always been camps (extreme anti-Microsoft sentiment or its sad corporate counterpart: disdain, fear and suspicion of all things open-source)- but these days it has gotten to the point where sensible, cheap, reliable, proven solutions that everyone on the team understands- are thrown out in favor of chasing the next big thing that some bigshot at some big conference declared was going to be the next, next, next "big thing".


This image does have its merits..


Amid all the continuous rush to be cutting edge- without understanding what that edge can do for you, and without a strong data foundation to build upon with that new cutting-edge thing- it doesn't matter what tools are out. You are still stuck with ideas and not programs.

Design and develop with what works for your particular team and project and within the context of the environments of your stakeholders (if all but 2% of your customers use Android then the iPhone version of your app may not be as important as you think). Above all else, make sure you understand the domain knowledge behind the data your application will be persisting and passing around. That (the data understanding) is the heart of every program that stores, processes, transmits or even simply reads/prints/paints- any kind of communication.

Data sense-making and software development are hard work. And they're not done in a vacuum. I suggest reading Stephen Few's "Big Data, Big Dupe", a little paperback containing some 90 pages of important wisdom for this modern rapid-fire information age, which preempts knowledge of data in favor of slogans and metrics about data.

In short, the essence of this book is that if you have say 10TB of crap data that is always causing ETL failures that your personnel spend countless hours trying to correct... you may indeed have "big data" per some misguided tech journalist's definition... but you still have crap data-- understand your data before you try understanding how best to fit it inside of the newest shiny box.

Take, for example, message queues and their usage in modern web application development. There seems to be a lot of misunderstanding about what MQ is, and even some who claim it is a new technology (MSMQ has been around since Windows '95; IBM MQ has been in use since 1993). Basic email has operated on a publisher/subscriber (SMTP) message queuing paradigm that works in much the same way as modern MQ implementations (minus some bells and whistles) since the early '70s.

These things aren't as complicated as they seem but they are complicated. And it's perilous to keep jumping from new trick to new trick whilst ignoring foundational, timeless software principles.

I would go so far as to say it is injurious to current and future generations of software developers to keep focusing on buzzwords, zooming out and away from the hard-but-necessary work of understanding the data, and then wondering why the tool or framework flavor of the year did not save the day.

Getting Familiar with Microsoft Azure

I'd like to summarize what I've learned in the past couple years of using MS Azure for personal and professional software development. Keep in mind this is coming from the perspective of a developer; Azure can be used for many interesting things outside the scope of just deploying, hosting, and scaling software in the cloud.


The Azure Portal UI is intuitive, constantly being updated (for the better), and contains tools to create and configure nearly anything you can imagine  


First, it's a bit of a maze.

Then it's amazing.


Starting out

The idea of any cloud provider is to enable IaaS, SaaS and PaaS among other XaaS's. Instead of having to provision physical machines, network equipment and associated hardware, and go out to dozens of different vendors to manage service agreements for the various services a company uses- nowadays a company can move most of that distributed mess into their own private cloud and just manage everything in one place.

And that one place is highly secure, geo-redundant and hosted on some of the best and newest hardware available.





Azure resources

In Azure you have the concept of resources which consume resource units. Anything can be a resource: a network card, a virtual machine, a firewall security policy- they are all resources in the world of Azure. You can create, modify and delete resources virtually at will- or on a schedule through automation scripts that operate on what are known as ARM (Azure Resource Manager) templates which are basically representations of Azure resources in the form of JSON.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "type": "string",
      "metadata": {
        "description": "Username for the Virtual Machine."
      }
    },
    "adminPassword": {
      "type": "securestring",
      "minLength": 12,
      "metadata": {
        "description": "Password for the Virtual Machine."
      }
    },
    "dnsLabelPrefix": {
      "type": "string",
      "defaultValue": "[toLower(concat(parameters('vmName'),'-', uniqueString(resourceGroup().id, parameters('vmName'))))]",
      "metadata": {
        "description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
      }
    },
    "publicIpName": {
      "type": "string",
      "defaultValue": "myPublicIP",
      "metadata": {
        "description": "Name for the Public IP used to access the Virtual Machine."
      }
    },
    "publicIPAllocationMethod": {
      "type": "string",
      "defaultValue": "Dynamic",
      "allowedValues": [
        "Dynamic",
        "Static"
      ],
      "metadata": {
        "description": "Allocation method for the Public IP used to access the Virtual Machine."
      }
    },
    "publicIpSku": {
      "type": "string",
      "defaultValue": "Basic",
      "allowedValues": [
        "Basic",
        "Standard"
      ],
      "metadata": {
        "description": "SKU for the Public IP used to access the Virtual Machine."
      }
    },

    "OSVersion": {
      "type": "string",
      "defaultValue": "2019-Datacenter",
      "allowedValues": [
        "2008-R2-SP1",
        "2012-Datacenter",
        "2012-R2-Datacenter",
        "2016-Nano-Server",
        "2016-Datacenter-with-Containers",
        "2016-Datacenter",
        "2019-Datacenter",
        "2019-Datacenter-Core",
        "2019-Datacenter-Core-smalldisk",
        "2019-Datacenter-Core-with-Containers",
        "2019-Datacenter-Core-with-Containers-smalldisk",
        "2019-Datacenter-smalldisk",
        "2019-Datacenter-with-Containers",
        "2019-Datacenter-with-Containers-smalldisk"
      ],
      "metadata": {
        "description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version."
      }
    },
    "vmSize": {
      "type": "string",
      "defaultValue": "Standard_D2_v3",
      "metadata": {
        "description": "Size of the virtual machine."
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for all resources."
      }
    },
    "vmName": {
      "type": "string",
      "defaultValue": "simple-vm",
      "metadata": {
        "description": "Location for all resources."
      }
    }
  },
  "variables": {
    "storageAccountName": "[concat('bootdiags', uniquestring(resourceGroup().id))]",
    "nicName": "myVMNic",
    "addressPrefix": "10.0.0.0/16",
    "subnetName": "Subnet",
    "subnetPrefix": "10.0.0.0/24",
    "virtualNetworkName": "MyVNET",
    "subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'), variables('subnetName'))]",
    "networkSecurityGroupName": "default-NSG"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[variables('storageAccountName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "Standard_LRS"
      },
      "kind": "Storage",
      "properties": {}
    },
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2020-06-01",
      "name": "[parameters('publicIPName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('publicIpSku')]"
      },
      "properties": {
        "publicIPAllocationMethod": "[parameters('publicIPAllocationMethod')]",
        "dnsSettings": {
          "domainNameLabel": "[parameters('dnsLabelPrefix')]"
        }
      }
    },
    {
      "type": "Microsoft.Network/networkSecurityGroups",
      "apiVersion": "2020-06-01",
      "name": "[variables('networkSecurityGroupName')]",
      "location": "[parameters('location')]",
      "properties": {
        "securityRules": [
          {
            "name": "default-allow-3389",
            "properties": {
              "priority": 1000,
              "access": "Allow",
              "direction": "Inbound",
              "destinationPortRange": "3389",
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*"
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2020-06-01",
      "name": "[variables('virtualNetworkName')]",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
      ],
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[variables('subnetName')]",
            "properties": {
              "addressPrefix": "[variables('subnetPrefix')]",
              "networkSecurityGroup": {
                "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
              }
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Network/networkInterfaces",
      "apiVersion": "2020-06-01",
      "name": "[variables('nicName')]",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]",
        "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]"
              },
              "subnet": {
                "id": "[variables('subnetRef')]"
              }
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2018-10-01",
      "name": "[parameters('vmName')]",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
        "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        "osProfile": {
          "computerName": "[parameters('vmName')]",
          "adminUsername": "[parameters('adminUsername')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "[parameters('OSVersion')]",
            "version": "latest"
          },
          "osDisk": {
            "createOption": "FromImage",
            "managedDisk": {
              "storageAccountType": "StandardSSD_LRS"
            }
          },
          "dataDisks": [
            {
              "diskSizeGB": 1023,
              "lun": 0,
              "createOption": "Empty"
            }
          ]
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))).primaryEndpoints.blob]"
          }
        }
      }
    }
  ],
  "outputs": {
    "hostname": {
      "type": "string",
      "value": "[reference(parameters('publicIPName')).dnsSettings.fqdn]"
    }
  }
}
An example of an ARM template- this one is for deploying/updating a Windows Server VM resource


For a trial period (currently 12 months), most all of the really useful stuff is free (be careful not to accidentally deploy Azure Co$mos though...😳 ...that is not free, and that is not cheap). After the trial period, the cost was still relatively cheap for the services that I use most in Azure (an App Service hosting a handful of .NET Core apps with SSL, 1 powerful virtual machine, a DNS zone, a vNet)- all for about $30/month.


Development

For development, much like it is with Git, the Visual Studio integration with Azure is pretty seamless and enables deployments directly from the IDE. You can also enable an Azure object explorer to view your Azure cloud instance's resources within VS.

Most all established companies are going to want to (for security reasons)- or will have to (for incompatibility reasons)- keep at least some legacy software and/or infrastructure on-prem.


Azure ARC connects your On-Prem to your Cloud



And that is why there is Azure ARC- an incredibly simple way to bridge cloud and on-prem resources to create a hybrid virtual network. ARC is essentially a service that you run on your on-prem machines to connect them to your Azure subscription, where the machines can then be configured as if they were Azure resources, enabling on-prem devices to communicate with cloud resources.

"Arc works by running an agent on your non-azure resources; this is a service on VM's and a Kubernetes pod on Kubernetes cluster. Once you install this service, the machine registers with Azure and is ready for management." -samcogan.com

Additionally, virtual machines (Windows OSs or approved Linux distros) can be accessed via SSH or RDP and are as amazingly fast or as tortuously slow as you configure them to be. You can choose from preconfigured database server or application server templates or build your virtual machines completely à la carte.

The ARM template paradigm is easy to understand and develop with, and there are 2 CLI options- Azure CLI and the new PowerShell "Az" module.


How to see "gains"

To see savings from using the cloud, instead of purchasing a new server or physical license, you can rent the computing power you need to power your apps and services and you can even move your worker machines onto the cloud where they can be more easily managed (we are indeed moving back to a thin client/dumb terminal world).

You can move from the physical Exchange mail server model to Outlook365. You can move all of your physical Office subscriptions to Office365.

If your computing needs are seasonal or time-sensitive, you can scale up when needed and pay a high price for short bursts of computing power, while scaling back down to a much lower-budget level until the next scale-up need arises. The configuration of the usage of resources in Azure is highly granular and lends itself to squeezing out a lot of efficiency for those who can monitor and manage it correctly in accordance with organizational needs.

Azure Hybrid Benefit also provides credits for customers who already have an on-prem SQL Server software license. Want to see that MSSQL2019 Enterprise for Linux instance in the cloud? 🙂


Monitor usage

Monitor your cloud resource usage, as you can inadvertently requisition a resource that behaves in ways you did not expect and in turn end up ringing up a lot of expensive RUs (resource units). Azure allows you to configure a budget and alerts for certain thresholds toward or beyond that budget- for example, an alert that emails you a warning message when you have reached 105% of your monthly budget.


The current Azure offerings are plentiful and powerful enough to outfit even the most complex IT infrastructure

Azure, like any cloud provider, forces you to take a fine-grained look at every single resource you are using. It is amazing how much stuff we don't actually use.

It is only when you begin to pay for usage of each resource and see the numbers rising daily that you really understand how much you are utilizing your various resources.

Powerful computing machines began as timeshares because of the realization that it is madness to let expensive machines sit idle. And though the resources to share and provision among users have become far more complex, we are returning to that model.


Conclusion

Whether you use Azure to explore different kinds of technology or to implement an IT infrastructure completely in the cloud to connect and supercharge your applications and/or workforce- the tech is now there and the costs are comparable to AWS.

My two criticisms of Azure are that (1) Azure seems to excessively spotlight certain features while others (many practical, free and cheap things you would think are "essential", like setting up DNS zones) remain in the shadows behind help links, waiting to be discovered... and (2), sadly, other little things, like logging analytics insights, that you would think are free are in fact Azure resources that charge RUs. 😕

These aspects suck but are tolerable in light of all of the awesome functionality Azure provides.

Microsoft continues to improve an industry-leading cloud platform that executives, management, engineers, developers, and system admins alike can all learn to love. 💖

PowerShell Commands

The origins of PowerShell lie in the Monad project which you can learn about here: https://www.jsnover.com/Docs/MonadManifesto.pdf


PowerShell base cmdlets and associated pipeline-parsing cmdlets provide powerful Windows administration tools



These can be used on the fly to glean and share information about a problem or the state of your machine(s) and network, or they can be crafted into useful scripts that run on a schedule to report on the status of applications and services, run backup and ETL tasks, and handle myriad other (often critical) scheduled jobs that happen routinely behind the scenes to keep IT operations organized and running.

You may, for instance, want to have a script run every few hours that gathers statistics about throughput and storage and then alerts admin users if a certain threshold is exceeded. Or, in Azure, you may want to utilize a scripted template (like Azure ARM and associated CLI commands) to configure new Azure resources and their environments.


Useful cmdlets:

#check security level
Get-ExecutionPolicy

#elevate security access level
Set-ExecutionPolicy Unrestricted

#get information on any service
Get-Service -Name PhoneSvc

#get the same log info seen in eventvwr
Get-EventLog -LogName "Application"

#get process information
Get-Process -ComputerName MYCOMPUTER

#stop process like cmd.exe kill
Stop-Process -Name "notepad"

#get drive information of the drives connected to the current PS session
Get-PSDrive 

#get information on any powershell cmdlet, function or module
Get-Help -Name Streaming

#get all the installed commands
Get-Command

#connect to your azure account with the "Az" module
Connect-AzAccount

#upload blob content to storage
Set-AzStorageBlobContent -File "D:\_TestImages\Image002.png" `
  -Container $containerName `
  -Blob "Image002.png" `
  -Context $ctx -StandardBlobTier Cool

#download blob content from storage
Get-AzStorageBlobContent -Blob "Image002.png" `
  -Container $containerName `
  -Destination "D:\_TestImages\Downloads\" `
  -Context $ctx

#stop a sql server instance
Stop-SqlInstance -ServerInstance MSSQL01

#clear screen
Clear-Host

#ping
Test-NetConnection

#telnet
Test-NetConnection -ComputerName <host> -Port <port>

#tracert
Test-NetConnection -TraceRoute

#ipconfig
Get-NetIPAddress

#nslookup
Resolve-DnsName -Name "Hostname"

#netstat
Get-NetTCPConnection

#flushdns
Clear-DnsClientCache

#ip release/renew
Invoke-Command -ComputerName <computer> -ScriptBlock {ipconfig /release}
Invoke-Command -ComputerName <computer> -ScriptBlock {ipconfig /renew}

#disable/enable network card
Disable-NetAdapter -Name "<adapter name>"
Enable-NetAdapter -Name "<adapter name>"



Additionally, it is often useful to implement piping of command output, especially in CI/CD toolchain scripting, where each script feeds its output (which becomes the next script's argument(s)) to the next script in the chain.


For example: 

#export a list to .csv file
Get-Service | Export-CSV c:\20200912_ServiceSnapshot.csv

#be more selective with the Select-Object cmdlet and pipe that to the csv
Get-Service | Select-Object Name, Status | Export-CSV c:\20200912_ServiceStatusSnapshot.csv

#get event information and pipe method (of each log event) info to the console
Get-EventLog -log system | gm -MemberType Methods

#get a process and stop it
Get-Process notepad | Stop-Process

#delete all files matching some Regex pattern
Get-ChildItem $Path | Where{$_.Name -Match "someFileName.txt"} | Remove-Item


