You need to eat your own dog food

I recently realized how useless my custom .NET method, "Sonrai.ExtRS.ReferenceDataService.GetGoogleNews(string searchTerm)", really was.

I had referenced Sonrai.ExtRS via NuGet and began trying to use it to improve the display of scrolling news links in another application I maintain, tickertapes.net.

It was useless! I mean a complete waste of code. It returned potentially useful data related to the searchTerm parameter (data from the Google News API), but that was about it: it literally returned the entire XML response. 🤦‍♂️


This was the crude oil, but not the refined gasoline a consumer of a NuGet package expects...


I was parsing all this XML in the tickertapes.net app (why, I do not know or care to remember), and so all the work of wrangling the XML response into the needed collection of news links fell on the consumer of the Sonrai.ExtRS NuGet package- not good!

The NuGet library should be doing all of that work. To be actually useful, it should return something structured: a collection of strings, each one representing a single news article, with the headline as the text of the news article link.

And so I moved back to Sonrai.ExtRS to correct this unfortunate oversight.  

  public static async Task<List<string>> GetGoogleNewsWithLinks(string search)
  {
    using var client = new HttpClient();
    var content = await client.GetStringAsync($"https://news.google.com/rss/search?q={search}");

    // parse the RSS response and collect the <title> and <link> elements (AngleSharp)
    var parser = new HtmlParser();
    var document = parser.ParseDocument(content);
    var newsItems = document.All.Where(m => m.LocalName == "title").ToList();
    var linkItems = document.All.Where(x => x.LocalName == "link").ToList();

    // pair each headline with its URL and emit a ready-to-render anchor tag
    var newsLinkItems = new List<string>();
    for (int i = 0; i < newsItems.Count && i < linkItems.Count; i++)
    {
      newsLinkItems.Add("<a href='" + linkItems[i].NextSibling!.NodeValue + "' target='_blank'>" + newsItems[i].InnerHtml + "</a>");
    }
    return newsLinkItems;
  }
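A consumer like tickertapes.net can now render the results directly. A minimal sketch of the call site (the class name follows the GetGoogleNews example above; the search term is arbitrary):

  var newsLinks = await ReferenceDataService.GetGoogleNewsWithLinks("markets");
  foreach (var link in newsLinks)
  {
    // each entry is a ready-made anchor tag, e.g. <a href='...' target='_blank'>Headline</a>
    Console.WriteLine(link);
  }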


And this provides the NuGet client with something it can actually use, turnkey, out of the box.

A list of ready-made HTML links; this is more like it.


The moral of the story is one that is as old as business and manufacturing: you must eat your own dog food. If there are problems or areas that need attention, it is best that you find out before your product is released into the wild and a customer discovers the bug (or, in this case, the uselessness), sullying your reputation as a business and software provider.

Test, test, test- always unit and/or integration test every user interaction and data movement for every story/path imaginable or supportable.

But nothing replaces simply using your own product the way real users use it, for the purpose it was made. What you discover may help you shore up previously unknown problems, or inspire you to make something useful that you would never otherwise think of unless you were thinking from a user's perspective.


Reference: https://www.nuget.org/packages/Sonrai.ExtRS


PS: Imagine if James Newton-King never used Newtonsoft.Json for his own de/serializations? Or if Stack Overflow never used Dapper for the SO app/site? You need to use your creations right at the ground level (as a user/client) to verify that the functionality you designed in the abstract exactly matches how things will play out in concrete reality. Thoughts? 💭

How to use .NET User Secrets in MSTest classes

When developing unit and integration tests, we don't necessarily want to share the secret keys and values we use as credentials for APIs, databases, etc.

So, similar to how we implement User Secrets in Program.cs, we can load User Secrets and set test class variables from them in the MSTest class's constructor (ReferenceDataTests() below):

  public static string upsId = "";  
  public static string upsSecret = "";  
  private IConfiguration _configuration { get; }  
  
  public ReferenceDataTests()  
  {  
    // set your API ids and secrets in UserSecrets (right-click project: "Manage User Secrets")  
    var builder = new ConfigurationBuilder()  
      .AddUserSecrets<ReferenceDataTests>();  
    _configuration = builder.Build(); 
    
    var secretVals = _configuration.GetChildren().ToList();  
    upsId = secretVals.Where(x => x.Key == "upsId").First().Value!;  
    upsSecret = secretVals.Where(x => x.Key == "upsSecret").First().Value!;   
  }  
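For reference, the backing secrets.json (created via "Manage User Secrets") would look something like this, with placeholders standing in for the real credentials:

  {
    "upsId": "[your-ups-id]",
    "upsSecret": "[your-ups-secret]"
  }

Any test method in the class can then use upsId and upsSecret to authenticate against the external API without those values ever landing in source control.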




ASCII art fun with .NET

My friend Benn with a nice largemouth bass

And here we have a simple function. Many thanks to its originator, Thinathayalan Ganesan.

You can find this in the Sonrai.ExtRS NuGet project under Sonrai.ExtRS.FormattingService.ConvertToAscii(Bitmap image) and see how it is used in Sonrai.ExtRS.FormattingTests.ConvertToAsciiSucceeds.

I talked a bit about how ASCII art works in a post on a similar Python script.

  // credit (Thinathayalan Ganesan): https://www.c-sharpcorner.com/article/generating-ascii-art-from-an-image-using-C-Sharp  
  public static string ConvertToAscii(Bitmap image)  
  {  
    string[] _AsciiChars = { "#", "#", "@", "%", "=", "+", "*", ":", "-", ".", "&nbsp;" };  
    bool toggle = false;
    StringBuilder sb = new StringBuilder();  
    for (int h = 0; h < image.Height; h++)  
    {  
      for (int w = 0; w < image.Width; w++)  
      {  
        Color pixelColor = image.GetPixel(w, h);  
        //Average out the RGB components to find the Gray Color  
        int red = (pixelColor.R + pixelColor.G + pixelColor.B) / 3;  
        int green = (pixelColor.R + pixelColor.G + pixelColor.B) / 3;  
        int blue = (pixelColor.R + pixelColor.G + pixelColor.B) / 3;  
        Color grayColor = Color.FromArgb(red, green, blue);  
        //Use the toggle flag to minimize height-wise stretch  
        if (!toggle)  
        {  
          int index = (grayColor.R * 10) / 255;  
          sb.Append(_AsciiChars[index]);  
        }  
      }  
      if (!toggle)  
      {  
        sb.Append("\r\n");  
        toggle = true;  
      }  
      else  
      {  
        toggle = false;  
      }  
    }  
    return sb.ToString();  
  }  

The key is the pixel-by-pixel assignment of values to the red, green and blue variables (and then the grayColor variable) under the comment "Average out the RGB components to find the Gray Color". By averaging the R, G and B components of each pixel you get a grayscale RGB color from Color.FromArgb(R, G, B). This grayscale color is then used to select the appropriate ASCII character to represent the shade of gray in each pixel of the image.

  • A darker pixel of an image maps to a denser ASCII character like "#", "@" or "%".
  • A lighter pixel of an image maps to a sparser ASCII character like ":", "-", "." or a space.

In this way we can easily convert an image from its pixel-based source representation to an ASCII character representation. Essentially, a computer image is just a mosaic, or a composite of parts (pixels usually, sometimes ASCII characters- for art and fun!). 😃
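To try it out, here is a minimal sketch of a call site (this assumes the System.Drawing.Common package, which is Windows-only in recent .NET versions, and placeholder file names):

  using System.Drawing;
  using System.IO;

  var bitmap = new Bitmap("bass.jpg");
  string ascii = FormattingService.ConvertToAscii(bitmap);
  // the output uses &nbsp; and \r\n line breaks, so wrap it in a <pre> tag if rendering as HTML
  File.WriteAllText("bass.txt", ascii);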


Reference: https://www.c-sharpcorner.com/article/generating-ascii-art-from-an-image-using-C-Sharp

extRS Portal: a modern SSRS client



ExtRS Portal provides a blueprint for extending the functionality of Reporting Services 


 
extRS (pronounced "extras") is a modern SSRS client for distributing and reading reports, with some extras. A demo of the app is linked here: https://extrs.net
   
The audience is SSRS report users (you know, the people you need in order to justify having enterprise reporting in the first place). So things like applying item-level RS security, managing users, and adding, editing and deleting SSRS catalog items and other system-level properties are not part of this client- at least not yet.

The aim here is to make SSRS at least slightly more interesting, accessible and usable for information consumers. This particular deployment of the extRS.Portal web client is connected to a report server with custom authentication (extRSAuth), which gets past the "Windows authentication" hamstring normally required by a default SSRS installation.

This wrapper and extension UI not only improve the user authentication experience and dynamism of SSRS parameter behaviors in the UI but also provide SSRS admins and other users with rich enterprise reporting usage and delivery data.

Most of the features contained in Reporting Services' built-in Report Portal at /reports are enabled.

I have disabled some things like deleting and uploading items for the sake of keeping my demo of the app small and simple.

The source code can be found here: https://github.com/sonrai-LLC/extRS



tickertapes

Users can search for any word or phrase and opt for news of common financial market indexes


Originally implemented as "Twickertapes" and utilizing the original Twitter API (v2.0), this app is merely a demonstration of what can be done with a little text input, an API (the Google News API), scrolling text, and ASCII art.

You can find it on the web here: https://tickertapes.net

 

An SSRS IFrame/CORS infinite redirect loop error and a quick and easy solution

The redirect loop looks like this and, in Edge, will display the error message: "[domain] redirected you too many times"

If you are trying to render the SSRS ReportViewer control within an <iframe>, you may run into a CORS issue that manifests as a series of 302 (Found) responses and an infinite redirect loop between the ReportViewer control (ReportViewer.aspx) and Logon.aspx.

As of SSRS 2022, without an explicit instruction to allow CORS, ReportViewer cannot be rendered within an <iframe> on an origin different from that of the report server.

If you are using custom authentication, the solution is easy enough. Just add cookieSameSite="None" and enableCrossAppRedirects="true" to the authentication <forms> tag in the report server's web.config.

  <authentication mode="Forms">
    <forms loginUrl="logon.aspx" name="sqlAuthCookie" cookieSameSite="None" timeout="60" path="/" enableCrossAppRedirects="true" requireSSL="true" />
  </authentication>

You may also need to enable CORS in your client app. In ASP.NET Core 8, this can be achieved through the following in your application startup code:
  app.UseCors(builder => builder
    .WithOrigins("https://localhost", "https://[domain]")
    .AllowAnyMethod()
    .AllowAnyHeader());

XML vs. JSON for API requests and responses

XML, "Extensible Markup Language", is a document-centric data format and data processing language.

JSON, "JavaScript Object Notation", is a lighter-weight, messaging-centric data format.

A question that often comes up in discussing APIs is whether an endpoint should accept and return JSON (increasingly the standard) instead of XML (the old standard from the era of XML-based SOAP services). If I have just one opinion to give, it is that an API response model should be format-agnostic (at least as regards JSON vs. XML). Why not have the API accept and return both types and give clients the choice of either JSON or XML?

If your web service is so complex that it has many XML-specific dependencies, you may want to look into simplifying your model. None of the heavy lifting (database communication, configuration and security information, data processing, etc.) should depend on the XML or JSON we send and receive in GETs and POSTs; as much as possible, that work should be done on the server side.


Here is an illustration from xml.com which describes the functionality equivalents in each format


In reality, if we model our program entities with easily understandable properties and no embedded configuration in the XML (embedded config was the norm with SOAP), we will have no problem serializing our model objects into either XML or JSON, losing no information in the process.
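As a quick illustration, the same POCO round-trips to either format (a sketch using System.Text.Json and XmlSerializer; the MyAppSettings type here is invented for the example):

  using System.IO;
  using System.Text.Json;
  using System.Xml.Serialization;

  var settings = new MyAppSettings { AppSettingDatabaseUri = "https://mysecuredatabaseserver.net" };

  // JSON: {"AppSettingDatabaseUri":"https://mysecuredatabaseserver.net"}
  string json = JsonSerializer.Serialize(settings);

  // XML: <MyAppSettings><AppSettingDatabaseUri>...</AppSettingDatabaseUri></MyAppSettings> (plus the XML declaration)
  var xmlSerializer = new XmlSerializer(typeof(MyAppSettings));
  using var writer = new StringWriter();
  xmlSerializer.Serialize(writer, settings);
  string xml = writer.ToString();

  public class MyAppSettings
  {
    public string AppSettingDatabaseUri { get; set; } = "";
  }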


"XML is a data format, AND it is a language also. It has many powerful features that make it much more than a simple data format for data interchange. e.g., XPath, attributes and namespaces, XML schema and XSLT, etc. All these features have been the main reasons behind XML popularity.

JSON’s purpose is solely structured data interchange. It serves this purpose by directly representing objects, arrays, numbers, strings, and booleans. When meta-data and document markup is not a requirement, always use JSON." 
 -Lokesh Gupta


JSON vs. XML: A simple example

  {
    "json": {
      "myAppSettings": {
        "appSettingDatabaseUri": "https://mysecuredatabaseserver.net"
      }
    }
  }

  <xml>
    <myAppSettings>
      <appSettingDatabaseUri>https://mysecuredatabaseserver.net</appSettingDatabaseUri>
    </myAppSettings>
  </xml>


It is a matter of preference for developers: for some, XML reads like a book, and XML element structure and hierarchy make for more easily understood data messages. For others, XML is overload, JSON is far more lightweight (i.e. faster), and JSON is more readable (especially for anyone knee-deep in NodeJS development, where JSON is the default format for everything). JSON is also inherently parseable with JavaScript.

Not so with XML. And XML's additional concerns (document media type mixing, security, channel bindings (remember WCF? 😅), looping, and various namespace and XSL/XSLT configurations and transformations) make it seem far more unwieldy than it actually is.

Aim for offering your API clients the option of JSON or XML for the response type via the "Accept: application/xml" or "Accept: application/json" request header. Happy parsing!
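In ASP.NET Core, offering that choice is nearly free; a sketch of wiring XML formatters in alongside the JSON defaults at startup:

  builder.Services.AddControllers(options =>
  {
    options.RespectBrowserAcceptHeader = true; // honor the Accept header from any client
    options.ReturnHttpNotAcceptable = true;    // return 406 rather than silently defaulting to JSON
  })
  .AddXmlSerializerFormatters();               // adds XML input/output formatters

With this in place, the same controller action serves both "Accept: application/json" and "Accept: application/xml" clients.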

References: 

https://restfulapi.net/json-vs-xml

https://www.guru99.com/json-vs-xml-difference.html


History of XML

  • XML is the Extensible Markup Language.
  • 1970: Charles Goldfarb, Ed Mosher and Ray Lorie invented GML.
  • XML was derived from SGML (the standardized successor of GML).
  • 1996: The development of XML started at Sun Microsystems.
  • February 1998: Version 1.0 of XML was released.
  • January 2001: IETF Proposed Standard: XML Media Types.

History of JSON

  • Douglas Crockford specified the JSON format in the early 2000s.
  • The official website was launched in 2002.
  • In December 2005, Yahoo! started offering some of its web services in JSON.
  • JSON became an ECMA international standard in 2013.
  • The most updated JSON format standard was published in 2017.

Power BI and SSRS - A complementary symbiosis

 

"Generally, Power BI paginated reports (SSRS reports) are optimized for printing, or PDF generation. Power BI reports are optimized for exploration and interactivity."

 

So, what really IS up with MSBI these days? Is SSRS getting shuttered? PBI has paginated RS-like reports, but not a lot of the other features SSRS provides. Microsoft marketing will continue to hate people like me who would go to great lengths to keep an aging reporting technology alive. But the thing is, SSRS simply does the job for the vast majority of reporting use cases. In the last 18-20 years there have been no major advancements in scheduling, snapshotting, caching, managing and distributing electronic information. And SSRS has all of that built in.


Similar BI products- but they'll both be around for a while, with PBI eventually subsuming all the most useful SSRS tech


Nearly all of the advancements in reporting technology have come on the presentation and client-side. We can now create beautiful ad-hoc analysis and brilliantly composed interactive charts and other data presentations. But this all comes with a not insignificant price ($10 per user/month). And beyond the price, much like Azure SQL (vs. a genuine "Microsoft SQL Server" VM) and the extremely limited Azure SQL Workbench (vs. SSMS), there is a lot that Power BI cannot do well or at all.

You may have noticed the built-in SSRS reports in SSMS 19's new Query Store feature. These are very useful reports that give DBAs an idea of how queries are being processed and which processes are consuming the most CPU. And it is a good example of a company "eating its own dog food".

I've seen SSRS installations that contained thousands of reports representing trillions of dollars of value, categorized and summarized with real-time security ownership, counterparty, price and other core trade data. Several of these business-critical reports had scheduled delivery and were cached and snapshotted programmatically.

Little old SSRS is a quiet but reliable business data spartan. To my surprise, it is actually quite popular in the investment banking industry, where stock valuations and company summaries on reports are a big part of the lifeblood that drives investment banking decision making.


A professionally developed PBI report looks and behaves more like an interactive BI dashboard- this is a good example of a good PBI report


And with a little bit of customization magic via things like ExtRSAuth, ExtRSNET48, ExtRS and other RS extension tools, SSRS and Power BI can be tailor-made to serve as a uniquely effective symbiosis of print-formatted, scheduled and data-driven management reports (SSRS) and ad-hoc or OLAP-based interactive data analysis charts and data visualizations (Power BI).

To answer the question of "when will SSRS be end of life?", I would say that SSRS isn't going away anytime soon. Microsoft has decided to combine SSRS and PBI (RS .rdl reports are the "Paginated Reports" in PBI) in a way that serves both platforms. The PBI 3.0 REST API indicates as much, as the combined SSRS/PBI API offers a plethora of functionality that .NET developers can use to get the best of both worlds (SSRS and Power BI) and customize RS and/or PBI dashboards to support unique business processes.
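For example, a .NET client can enumerate reports via the Power BI REST API in a few lines (a sketch; acquiring the Azure AD access token is omitted here):

  using var client = new HttpClient();
  client.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); // token from Azure AD
  string reportsJson = await client.GetStringAsync("https://api.powerbi.com/v1.0/myorg/reports");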


A print-formatted SSRS report- great for standardized, templated data reporting


The choice of which tool or tools you enrich your printed reports and data visualizations with is yours. Keep in mind that many organizations make use of both, with SSRS getting equal or more attention than PBI even to this day: not only because of the huge global SSRS install base of currently running reports (many of which support critical business and governmental processes across the globe), but also because Power BI requires a monthly subscription fee :( . Freeware seems to be slowly dying. Let's hope things change with the next version of SQL Server and maybe we'll get free* PBI.

SQL Server 2025? Happy reporting and data analyzing. Always remember that PBI and SSRS serve different organizational needs- ad-hoc data analysis and pixel-perfect, professional, print-ready reports, respectively.


*(at least a free "tier"? I mean c'mon MSFT..... developers want to CREATE, and MSBI data visualization creativity is dying behind that paywall)- SSRS and PBI should be free and work hand in glove. Anything less is a mistake and a gigantic missed opportunity, imho.


Reference: https://learn.microsoft.com/en-us/power-bi/guidance/migrate-ssrs-reports-to-power-bi

Dichotomies

Apropos of nothing I thought about some interesting dichotomies and wanted to share my perspective. It is interesting how often there are effectively 2 sides to a coin.


Religious Faith/Religious Dogma - Some folks live the "spirit" of the Word and resist structure vs. others, who follow strict adherence to "the letter" of the Word.

Subjective/Objective - You have here opinion and feeling (the supernatural) vs. facts and laws of nature.

Romantic/Classical - similar to the Subjective/Objective dichotomy (see also, "Zen and the Art of Motorcycle Maintenance").

Liberal/Conservative - The ideology of inclusion and change vs. the ideology of exclusion and stasis.

Conglomerate/Individual Co. - Whereas many corporations sought economies of scale through M&A in the '80s, '90s and '00s, General Electric proved (selling GE Capital and spinning off GE Healthcare, GE Aviation and GE Power into three individual new companies) that the conglomerate model doesn't always stand the test of time.

Thin client/Fat client - It makes no sense to be all one or all the other (an overburdened SPA app or an inflexible server-side-only app). But we keep moving between one (thin client terminals in the '70s and '80s) and the other (fat client home PCs in the '90s and '00s). And we've moved back to thin client again with Azure, AWS and GCP and the omnipresence of SaaS. But at the same time, fat-client SPA apps are as popular as ever... I guess we have fat client UIs and thin client APIs.

Imperative/Declarative code - Instructions that read like a book vs. instructions that read like a mathematical proof.

Monolithic app/Microservices - a very heavy Swiss-Army knife vs. a bunch of lightweight kitchen knives.

Software/Hardware - Recipes vs. the raw food itself.

Socialism/Capitalism - The idea that everyone is the same and should be reduced to (or propped up by) exactly such vs. the idea that everyone is incompatibly unique and should be uniquely catered to and provided maximum personal freedoms that can- like assault weapons and ammunition- come at the expense of the greater community.

Urban/Rural - Those from the densely populated cities and suburbs vs. those from the sparsely populated country towns.

Introvert/Extrovert - He or she who expends a great deal of energy socializing and finds solace in silence vs. he or she who is energized by interpersonal social connection and is not comfortable alone for extended periods of time.

Centralization/Decentralization- Putting the heart of a system at the center with dependent, often necessarily-generic/homogenized nodes vs. putting the heart on the nodes themselves, at the expense of (losing) ease of simultaneous node (state) synchronization and a "single source of truth/golden records".



Charts Suggestions - "A Chart Chooser" (edited)

Just because you can do things with "non-data ink" (Edward Tufte's term) does not mean you should do things with "non-data ink". Below is a useful guide to charts for presenting different perspectives of data. I have crossed out the ones that are unanimously decried in the data visualization community for having confusing elements that do not effectively communicate the meaning of data.

Bar charts. Line charts. Dot or scatter plots. And bullet charts for really communicating a lot of information in a small space. That's really all you need.

 

The "Thought-Starter" above, edited to cross out (in red) all useless and confusing visuals to AVOID


Bullet charts are highly effective at displaying sub-ranges within a spectrum; the sole max line communicates threshold or "target" values



Focus on ensuring that every piece of ink in your chart is conveying some kind of useful information. If not, delete it. Blank space is better than distracting ink.

If you need an example of "distracting, non-data ink" then look no further than the following almost headache-inducing example:

I imagine the author of this chart was more interested in the art than the meaning of the data; this is a really bad data visualization



Another simple yet very powerful data visualization technique is to show the same type of chart repeated for contiguous time intervals or for different groups at the same point in time. An example is the following small multiples chart on alcohol consumption in different countries:


Small multiples charts really highlight the outliers (S. Korea?!!)


For more on the use of small multiples for effective data visualization, I shared more examples a while back at kpitsimpl: Small Multiples (are awesome). KEEP IT SIMPL.





ExtRSAuth for Custom SSRS Authentication (works w/newest SSRS version 16.0.8)



Fortunately, ExtRSAuth code doesn't require any updates to work with MSSQL SSRS 2022


ExtRSAuth for custom SSRS security 

This assembly, forked from the Microsoft Custom Security Sample, extends and improves custom authentication to allow for mechanisms other than a user/password credential check and to offer a seamless pass-through of the Login page when something present in the HttpRequest verifies that the user is already authenticated. For instance, the user already has an app token from an app that communicates with the report server, and you need the communications with the report server to involve no intermediate screen or login UI. The user just wants to auth as fast as possible and get to their report, right?


What does ExtRSAuth do to authenticate SSRS user connections?

For direct URL report server access, the default here is to allow local connections, which grants Admin rights for any local requests. If the SSRS request is external, a fallback option accepts an AES 128-bit encrypted querystring from the calling app, and, if decryption succeeds, the application is authenticated and allowed to communicate using a read-only SSRS user; any exception thrown indicates the request is neither a local connection nor a secure request from the external app.


ExtRSAuth gives SSRS environments the freedom from MS Active Directory that they deserve



To secure the built-in SSRS REST API v2.0 access, you can simply customize the LogonUser() and VerifyPassword() methods in AuthenticationExtension.cs and AuthenticationUtilities.cs, respectively:


If you don't perform a security check here, the SSRS REST API will be open to anyone who knows the SSRS admin username
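To make the idea concrete, here is a sketch of the kind of check that could live in VerifyPassword() (illustrative only- not the actual ExtRSAuth implementation; Decrypt() here is a hypothetical AES helper):

  internal static bool VerifyPassword(string suppliedUserName, string suppliedPassword)
  {
    try
    {
      // treat the supplied password as an encrypted token from the trusted calling app
      string token = Decrypt(suppliedPassword); // hypothetical AES decryption helper
      return suppliedUserName == "extRSUser" && !string.IsNullOrEmpty(token);
    }
    catch (System.Security.Cryptography.CryptographicException)
    {
      return false; // decryption failed: not a trusted caller
    }
  }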


As you can see, this is but one of many approaches we can take with ExtRSAuth in an SSRS-connected application or business environment. Any type and granularity of custom authentication and level of authorization is possible. The only ingredient needed is a .NET developer or developers willing to customize a pretty basic .NET security model.

Real-world applications

This Custom Auth assembly has been tested with (1) several .NET Framework 4.8 and .NET 5, 6, 7 and 8 web and mobile applications, (2) with the SSRS API and all its operations, (3) with the SSRS /ReportServer and the /Reports management web interface as well as (4) Visual Studio 2022 Reporting Services projects (report designers can seamlessly deploy Report Server projects from VS to the Report Server with ExtRSAuth).


After running InitalizeExtRSAuth.ps1, a successful installation will output the above


Demonstration

This YouTube explainer video describes the SSRS external user authentication problem that ExtRSAuth addresses.

Requirements
This plug-in relies on SSRS (2016 or later) and a report server configuration as described in Microsoft's Reporting Services Custom Security Sample.

Replace [your_sym_encr_key] with your symmetric encryption key. Clients can encrypt the SSRS URL access querystring with Sonrai.ExtRSAuth.Excryption.Encrypt() or a similar 128-bit AES encryption implementation, or modify Encrypt() with any encryption algorithm and key and block sizes.
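For reference, a minimal sketch of what a 128-bit AES Encrypt() can look like in .NET (an illustration of the approach, not necessarily the repo's exact implementation; key management and the matching Decrypt() are elided):

  using System.Linq;
  using System.Security.Cryptography;
  using System.Text;

  public static string Encrypt(string plainText, byte[] key) // 16-byte key => AES-128
  {
    using var aes = Aes.Create();
    aes.Key = key;
    aes.GenerateIV();
    using var encryptor = aes.CreateEncryptor();
    byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
    byte[] cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
    // prepend the IV so the report server can decrypt the querystring
    return Convert.ToBase64String(aes.IV.Concat(cipherBytes).ToArray());
  }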

Clone it, customize it further (or not) and get started today: https://github.com/sonrai-LLC/ExtRSAuth

Inflation 2022


Funny comic that is less funny in the midst of our high inflation


Inflation is defined as the rate of increase in prices over a given period of time.

To understand inflation we need to understand how the money supply is controlled. The best way to understand fiat* currency and the necessarily independent role of the Federal Reserve bank and its actions to influence the overall money supply is to picture a small town with X amount of money circulating in the town.

The people in the town work hard to create goods and services that did not exist before their work. This new work is how money or value- temporary (services) or permanent (goods)- can literally be "created" without any tangible (silver, gold, etc.) backing. The town can create its own paper to represent the incremental value they create each year.

When an economy's overall value expands (like a positive GDP growth rate for a nation), the money supply needs to be increased to reflect the increased value available for purchase or lease.

This "town money" is backed by the stuff produced by the townspeople each year, which, aside from a rare bad (crops, etc.) year every few decades, only ever increases. The townspeople have large, growing families** so the annual productivity gains and corresponding increases in the town's money supply can be counted on for generations*** and become part of the town's normal economy.

If all of a sudden the town had to completely stop producing goods and services, the town's money (which is tied directly to the productivity of the town) would lose its purchasing power as each of the precious few remaining (already-produced) goods and services in the town became more and more scarce and expensive.




Recent inflation readings have been "not good" to put it mildly


The role of the Fed is to print money when positive GDP growth warrants it, to set base interest rates which banks use as the rate to lend money to each other, and to perform what are called "open market operations" where the Fed buys or sells bonds to influence the U.S. economy.

In normal times the Fed can cool an overheating economy by selling bonds and raising the base lending rates (which makes it more expensive to get money and slows spending), and can stimulate a flagging economy by buying bonds with cash which injects money into circulation (as was the case in the 2008 Recession and COVID stimulus grants, checks and other stimulus programs of the past 14 or so years).

During a so-called "credit event" in the national and world economies (i.e. overvalued "funny money" being found out and underlying assets being devalued, causing debts to default), the government can quite literally 'create' money by printing physical dollars and updating the corresponding central digital banking ledgers to make up for the loss of liquidity in markets. This is the scenario in which the Fed needs to add money to the money supply to stimulate the depressed labor market and demand that follow a credit event.

The problem lies in the lack of added value when the Fed prints money or buys debt securities far in excess of the actual value available in the economy. If there is nothing backing the new money, the new money represents a devaluing of currency because the overall money supply has changed, but the value available in the marketplace has not changed. This is why large loans typically need to be "secured" via a physical asset backing the loan- like a dwelling place, valuable land or an expensive vehicle.

A government's central bank can provide unbacked stimulus temporarily, but only temporarily. Eventually, any mismatch between value available and capital available will rebalance.

The problem right now is that we have to remove money from circulation to get inflation under control. And nobody ever wants the pain that is necessary for valuations and corresponding debts to "slow down" and reflect more realistic, tangible, clearly derived and described values. Fed actions to remove money from the money supply and hike base interest rates mean that money will become much more expensive to obtain, and prices will remain bid up by the wealthiest spenders until the world economy has been drained of all this excess money in circulation.



Americans would need an average annual pay raise of around 7-8% at this point simply to keep up with current inflation



Banks and large companies have been overdosing on easy (near-zero percent interest rate) money since 2008. Long-term high inflation is often the result of this kind of "cheap/easy money" monetary policy. When money can be had at the absurdly low rates it could be until very recently, an economy, like that of the aforementioned fictitious small town, becomes awash in "too much money, chasing after too few goods and services".

In other words, people on average have a lot of money right now- but they cannot find enough things to spend it on. And because money has been raining on our economy for so long without an equal increase in the corresponding value backing the money, there simply aren't enough goods and services to go around for everyone (ie. meet the demand).

So what happens when once-affordable products and services become scarce like this? Naturally, the people who possess the most money in the economy bid prices up as high as they are willing to pay, much like an auction or eBay, and this raises prices for everyone.


Inflation was responsible for the rout of Reagan vs. Carter; no incumbent will win with 10% inflation (*cough*)


This excess supply of money- the principal cause of our current inflation and coming Recession- is accompanied by the inflationary pressures of China's COVID lockdowns and Russia's invasion of Ukraine disrupting global supply chains.

So not only are there fewer things and services available to purchase, but higher energy prices (sanctions on Russian oil) and slower delivery of supplies and goods to/from China are raising the prices of goods and services even higher than would happen in a "normal/historical" high-inflation period.

There are many takes on our current elevated inflation, but they all boil down to the same conclusion: when a normal balance between the money in circulation and the goods and services available for purchase returns, inflation will return to the normal/expected ~2% year over year.

Naked capitalism can be ugly and often reduces virtually all human value to metrics. We can do better: be more prudent about assessing risk while also being more sensible (and equitable) in the distribution of new capital. And we must.

This system is all we've got. And nobody wants to keep going on this terrifying plane ride every 8-10 years.









*dollars backed not by tangible precious metals like gold, silver or another scarce commodity resource but rather, backed by the "full faith and credit of the U.S. government". This "full faith and credit" is defined as "trust and reputation". But in regards to dollars and dollar-derived assets, the full faith and credit of the U.S. government includes its ability to collect massive amounts of revenue through taxes, as well as law enforcement and the military who protect assets from theft or seizure. Though this malleable "faith" backing is ambiguous (it is a bit like "Goodwill" in business accounting), it's not for nothing.

**this is why a mid-to-high-growth U.S. economy is dependent on increases in nominal birthrates (workforce increases) and immigration (workforce increases). Otherwise there won't be enough people to provide for the demand of the older, retiring workforce and increasing wealth. That we've now produced a series of American generations who are not as well off as their parents were is clear evidence of this already happening. There is simply demand outstripping supply at every turn.

***if this all seems like a gigantic institutionalized Ponzi scheme, that's because in a way, it is. 😐

                        MORE
                             moremore
                     moremoremoremore
              moremoremoremomoremore
        moremoremoremoremoremoremore
    moremoremoremoremoremoremoremore
  absolutelynothingabsolutelynothingabsolute...
       [uh-oh-we-can't-pay-the-ppl-above-us]



FP Lite

Less filling?


The Imperative/Object vs. Functional/Declarative paradigm divide has confused many a beginning developer, and even the seasoned developer writing this article. Here I attempt to shed some light.

Why FP? For starters, controlling object hierarchies and keeping track of object state is one of the most difficult challenges of OO design. Additionally, we live in a world of interactions, not models in a fixed state. FP aims to simplify those interactions by reducing the scope of what could possibly go wrong. FP aims to create fixed, 100%-predictable-at-runtime operations vs. the polymorphism and dynamic composition so often seen in OO.



Huh? 😕


Many an FP project was begun as a pure NodeJS project, only to fall back on TypeScript when the need for type-checking and compilation became too obvious to ignore. Why is this?

There seems to be a push and pull between the flexibility that JS offers (the kind of "super dynamism" inherent in the JS prototyping model) vs. the adherence to SOLID principles that typed modeling with pre-compilation provides. When implemented reasonably and practically, FP can greatly simplify a project, or at least simplify some of its lambda-izeable code.


The Virtues of FP

  • FP's versatility lies in its ability to abbreviate logical statements with lambda calculus, thereby shrinking the solution space and reducing cognitive load for the developer (for enthusiastic students of Calculus theorems, at least 😃)
  • Also: x => x.Happiness


The Virtues of OOP

  • Since the release of Smalltalk, OOP has seen widely successful iterations in Java and C#
  • Its footprint is vast and its legacy rich (C# 12, .NET 8, Java 18)
  • OO development is still the best way humans have to model entities and complex hierarchies; FP does not magically solve or remove the OO state-tracking problem (and actually makes debugging less transparent in many cases), it merely pushes it to another corner of the development space (the function implementations).

When Alan Kay coined the term "Object Oriented Programming" in the 1960s, he had a background in biology and was attempting to make computer programs communicate the same way living cells do. Kay's big idea was to have independent programs (cells) communicate by sending messages to each other. The state of the independent programs would never be shared with the outside world (encapsulation). Said Kay years later, after OO had achieved market dominance: "I'm sorry that I long ago coined the term 'objects' for this topic because it gets many people to focus on the lesser idea. The big idea is messaging."


FP simplified: function interaction for behavior vs. object interaction for behavior.


Indeed.

In my experience with it, FP yielded successful programs, but never did I feel that I had intimate knowledge of how things were working outside of the modules I implemented or modified, because the code was not inherently readable- to me, that is. When things go wrong, that can be a problem no matter how predictable the (wrong) results are. Certainly, with time and exposure, an OO developer can learn to reason about FP code equally if not more effectively than OO code; much of this debate is, at its core, a preference of style and mental model and has no right/wrong answer.

Oftentimes, the best decision is to use a little bit of both OO and FP. With the introduction of lambda statements in C#, .NET became a veritable proving ground for software developers who knew they could greatly simplify .NET code by flattening all those convoluted and bug-prone foreach and for loops into simple lambda maps. Making certain properties readonly in an OO project will shine a (useful) bright light on just how much your code has spiraled out of control... y'know?

To those who claim that FP will magically prevent all bugs, don't even try obfuscating the truth:

You cannot run away from or completely shield yourself from complexity.

Not by declaring, "we are an FP team now, as such all behavior must be function interactions, no mutable state anywhere!". That isn't what FP promises. Implemented practically however, FP is a far superior solution for certain use cases and certain development teams (think- very math-oriented software engineers).



In this instance, the case for FP is clear; in .NET we can use FP within OO via lambda fn()'s, which achieves the simpler code on the right
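That flattening looks something like this (a generic sketch; the users collection and User type are invented for illustration):

  // imperative: mutate an accumulator inside a loop
  var activeNames = new List<string>();
  foreach (var user in users)
  {
    if (user.IsActive)
    {
      activeNames.Add(user.Name);
    }
  }

  // functional: the same behavior as one declarative expression
  var activeNames2 = users.Where(u => u.IsActive).Select(u => u.Name).ToList();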


FP Drawbacks and warnings

  • Sometimes not feasible when trying to model application entities and their behaviors
  • The same brevity that some developers love in FP can cause readability issues for others
  • Often the tests for FP become so large that any gain in brevity is kind of negated

OO Drawbacks and warnings

  • Type modeling can lead to an excessive number of types, increasing program size and complexity
  • Bugs related to state mutations as a result of function/object side-effects
  • More problem space for things to go wrong


More than any of the three issues above, the biggest problem I have with functional programming (in the Node ecosystem at least) is that the entire supply chain is open source and that can lead to neglected, abandoned, sabotaged or broken projects without warning. Love or Hate MS, they make sure (some) bugs are fixed and libraries are maintained.

When a NodeJS package/module that you depend on becomes obsolete or is compromised, it forces teams to find an alternative and, in the worst cases, entirely rebuild functionality with completely new dependencies.

And related to this, there is a bit too much over-distribution/splintering of source code going on with many (most?) FP Node projects. Twenty dependencies in a project, sure. But I've worked on Node projects that involved only a few functions yet relied on hundreds of NPM package dependencies, many of which were several versions out of date (and couldn't be updated to current because, well, y'know, it's a breaking Node package update! Weeeeee... 😃 (C'est la vie: such is the life we live))


😆


On the flipside, there is something immensely liberating about having not only a completely commercial-free OS like Linux, but also free IDEs and a worldwide bazaar of NPM packages that fulfill virtually any development requirement one might ever encounter.

FP style emphasizes ensuring your code has "high signal-to-noise ratio"; when things break, we want to know where it hurts and why, immediately. To paraphrase Kent Beck in his book on unit testing, stack traces of poorly written integration tests and production failures often tell us there is a problem in our application, "but, where oh where?".

If we keep our abstractions tightly wound and limited (and oh boy does FP do that in spades), we will immediately become aware of any issue in our program and know exactly what line or lines of code are responsible.


The "core" of FP is in its ability to do several things at once (parallelism- prime domain for large list processing)


Summary

FP is only going to gain a greater and greater share of the software development world as more and more mathematically inclined developers join the ranks of the IT industry. To be sure, Java and .NET have a lot of great stuff on Maven and NuGet, respectively.




You can even collaborate on open-source projects in .NET, Java and even C++ in this day and age of 2022. In fact, MS documentation and a large portion of the MS code library are now open source.

However, no open-source communities are as active as the ones you will find for FP (Lisp, Go, NodeJS, Haskell, Clojure, F#). Whatever Lisp or Haskell may lack in worldwide development footprint, they make up for in developer enthusiasm, evangelizing the FP mindset and how it can lead to more concise, easier to test and reason about application code.

More than anything else, FP is a mindset, or a "mental model" of how program messaging is defined.

More specifically, it is the mindset of passing around functions-as-arguments-to-other-functions (like recursion, but for everything in the program) instead of object composition, state tracking and function arguments. This not only eliminates the need for state tracking and therefore much of the problem space, but it also opens up the application code to mathematical proof modeling of the application behavior requirements- a huge productivity gain if your development team is a heavily math-oriented bunch.
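In C# terms, that mindset maps onto delegates and Func<>; a tiny sketch of a higher-order function:

  // a higher-order function: takes a function and returns its self-composition
  static Func<int, int> Twice(Func<int, int> f) => x => f(f(x));

  Func<int, int> addFive = x => x + 5;
  Console.WriteLine(Twice(addFive)(1)); // 11- behavior built from function interaction, no mutable state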


OO vs. FP


Lisp is short for "list processor". Naturally, programs involving list manipulation and uniform message processing are a very good candidate for FP code which is excellent at doing very specific tasks that involve inputs and outputs and not much else- at huge scale.

Like most software development trends, FP is not new (Lisp was developed at MIT in the late 1950s), but it is experiencing a renaissance thanks to developers rediscovering and extolling its virtues with the larger (and largely OO since 1990s) software development community.

A little bit of FP can go a long way and can clear away complexity in code like so much brushfire. But don't go using it to look for problems in existing OO code that don't exist.



FP Quotes

"You can use OO and FP at different granularity. Use OO modeling to find the right places in your application to put boundaries. Use FP techniques within those boundaries" -OOP vs FP

"OOP is not natural for the human brain, our thought process is centered around “doing” things — go for a walk, talk to a friend, eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects." -FP essay

"OOP does not have enough constraints in place that would prevent bad programmers from doing too much damage." -Ilya Suzdalnitzski

"Encapsulation is the trojan horse of OOP. It is actually a glorified global mutable state" -OO, the trillion dollar disaster



References:

https://betterprogramming.pub/object-oriented-programming-the-trillion-dollar-disaster-92a4b666c7c7

https://dev.to/bhaveshdaswani93/oop-vs-fp-with-javascript-39jf

https://www.lihaoyi.com/post/WhatsFunctionalProgrammingAllAbout.html#the-core-of-functional-programming