Price Discrimination

Price Discrimination is the practice of selling the same product or service at different prices to different buyers in order to match differing levels of demand. It is used to capture business from lower-demand markets while earning the maximum possible profit from higher-demand markets. A familiar illustration is a senior or a child getting a discount at the movie theater, because they tend to have lower demand (and a lower willingness to pay) than the average moviegoer.

Examples of Price Discrimination

"Price Equilibrium" (PE) is the price point at which a Supply Curve and Demand Curve intersect. Any price charged above the PE will result in more profit (seller surplus) and any price below PE will result in less profit (consumer surplus, missed opportunity by seller). Pricing products and services is done through the process known as Marginal Cost Analysis.

Price discrimination can be quite problematic when it is applied on the basis of ethnicity or socioeconomic status.

Just a matter of risk data? Or a racially biased algorithm used by banks?

Although the discriminatory practice known as "redlining" has been outlawed for over 50 years, banks continue to charge higher mortgage rates to non-white consumers. Banks would argue this is coincidence, that the rates simply reflect the consumer's credit and the higher risk being taken on. Others would argue that minority loan-seekers are being priced out of the American Dream because of the color of their skin.

Gas stations tend to have higher-than-average prices in low-income areas because the customers there have fewer nearby options, are often less mobile, and are in general less price-sensitive than shoppers in a wealthy suburb, who can leverage greater local competition and their own mobility to be more selective in their consuming habits (which is to say, their demand is more elastic: more likely to rise or fall sharply when a price is out of equilibrium).


Now in some cases price discrimination makes perfect sense. Take, for example, a hardware store in Arizona and a hardware store in Wisconsin, both selling snowblowers. The store in Arizona is almost assuredly going to sell its snowblowers for far less, as demand for snowblowers is very low in that part of the country.

But in Wisconsin there is reliable demand, as it snows every year, and so the Wisconsin store is likely to charge much more than the Arizona store. Furthermore, even within Wisconsin, stores will charge less for snowblowers in the summer than in the winter (when demand is higher).

Companies differentiate prices to match demand for different types of consumers

With the exception of fourth-degree price discrimination (where price differences reflect the seller's differing costs), when a price differs for different consumers of the same product or service, it is because demand for that product or service differs among those consumers, and companies set prices accordingly.
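For intuition on how demand differences translate into different prices, here is a hypothetical sketch using the textbook inverse-elasticity (Lerner) rule; the marginal cost and elasticities below are made-up numbers, not data from any real theater:

 # Hypothetical illustration of third-degree price discrimination via the
 # textbook Lerner (inverse-elasticity) rule: P = MC * e / (1 + e), where
 # e < -1 is the segment's price elasticity of demand.
 def segment_price(marginal_cost, elasticity):
     return marginal_cost * elasticity / (1 + elasticity)

 mc = 5.00  # made-up marginal cost of admitting one moviegoer
 for segment, e in [("students/seniors (more elastic)", -4.0),
                    ("average moviegoer (less elastic)", -2.0)]:
     print(f"{segment}: ${segment_price(mc, e):.2f}")
 # The more price-sensitive segment gets the lower price: $6.67 vs $10.00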

TwickrTape

Real-time scrolling Tweets related to financial news alerts and updates. Just enter the Twitter handle you want to see a marquee of live streaming tweets from.

TL;DR - the working app is here: https://twickrtape.azurewebsites.net
...and here on the Google Play Store: https://play.google.com/store/apps/details?id=io.cordova.twicktape

Use button in lower-left to toggle different Twitter handles

The code is a hodgepodge of various references and vestiges of past personal projects. My initial aim was to get something working end-to-end as I envisioned, and I was able to achieve that pretty easily through the Twitter API and .NET.

The process is standard API stuff: first you authenticate and receive a token, which you then pass with each call to prove that your requests are valid. Then, with a simple HTTP GET against the REST API, we fetch the content (Twitter timeline information) that we are interested in.

The gist of the code is .NET C# (an ASP.NET Controller method) and listed below:

  public ActionResult GetTwickr(string handle = "business")  
     {  
       // Set your own keys and screen name  
       var oAuthConsumerKey = "XXXXXXXXXXXXX"; // "API key";  
       var oAuthConsumerSecret = "XXXXXXXXXXXXXXXXXXXXX"; // "API secret key";  
       var oAuthUrl = "https://api.twitter.com/oauth2/token";  
       var screenname = "@" + handle; // prepend @ to the Twitter handle
       // Authenticate  
       var authHeaderFormat = "Basic {0}";  
       var authHeader = string.Format(authHeaderFormat,  
         Convert.ToBase64String(Encoding.UTF8.GetBytes(Uri.EscapeDataString(oAuthConsumerKey) + ":" +  
         Uri.EscapeDataString((oAuthConsumerSecret)))  
       ));  
       var postBody = "grant_type=client_credentials";  
       HttpWebRequest authRequest = (HttpWebRequest)WebRequest.Create(oAuthUrl);  
       authRequest.Headers.Add("Authorization", authHeader);  
       authRequest.Method = "POST";  
       authRequest.ContentType = "application/x-www-form-urlencoded;charset=UTF-8";  
       authRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;  
       using (Stream stream = authRequest.GetRequestStream())  
       {  
         byte[] content = ASCIIEncoding.ASCII.GetBytes(postBody);  
         stream.Write(content, 0, content.Length);  
       }  
       authRequest.Headers.Add("Accept-Encoding", "gzip");  
       WebResponse authResponse = authRequest.GetResponse();  
       // deserialize into an object  
       TwitAuthenticateResponse twitAuthResponse;  
       using (authResponse)  
       {  
         using (var reader = new StreamReader(authResponse.GetResponseStream()))  
         {  
           var objectText = reader.ReadToEnd();
           twitAuthResponse = JsonConvert.DeserializeObject<TwitAuthenticateResponse>(objectText);
         }  
       }  
       try  
       {  
         // Get timeline info  
         var timelineFormat = "https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name={0}&include_rts=1&exclude_replies=1&count=5";  
         var timelineUrl = string.Format(timelineFormat, screenname);  
         HttpWebRequest timeLineRequest = (HttpWebRequest)WebRequest.Create(timelineUrl);  
         var timelineHeaderFormat = "{0} {1}";  
         timeLineRequest.Headers.Add("Authorization", string.Format(timelineHeaderFormat, twitAuthResponse.token_type, twitAuthResponse.access_token));  
         timeLineRequest.Method = "GET";  
         WebResponse timeLineResponse = timeLineRequest.GetResponse();  
         var timeLineJson = string.Empty;  
         string scrolltxt = string.Empty;  
         using (timeLineResponse)  
         {  
           using (var reader = new StreamReader(timeLineResponse.GetResponseStream()))  
           {  
             timeLineJson = reader.ReadToEnd();  
           }  
         }  
         // deserialize into an object  
         dynamic obj = JsonConvert.DeserializeObject(timeLineJson);  
         foreach (var r in obj) // each item is one tweet in the returned JSON array
         {  
           scrolltxt += " ***** " + r.text;  
         }  
         var model = new LoggedInUserTimelineViewModel { TimelineContent = scrolltxt, Handle = handle };  
         return View("TwickrMain", model);  
       }  
       catch (Exception) { /* swallow and fall through to the empty view; log this in production */ }
       return View("TwickrMain");  
     }  

Next, using Newtonsoft.Json's JsonConvert, we deserialize the JSON response and zero in on the 'text' property of each entity (tweet) in the array of JSON results.

And finally, using some basic Bootstrap HTML, JavaScript and CSS, we are able to wire up a very simple but effective app that interacts with the Twitter API in real time.

Reference: https://developer.twitter.com/en/docs/tweets/timelines/api-reference/get-statuses-user_timeline.html

Using PowerShell and youtube-dl to automate grunt work

Earlier this evening I was tasked by my wife with creating audio files from some (DRM-free) YouTube content. I knew how to do this via a manual process; however, after the fifth manual file conversion I remembered the youtube-dl Python project and figured there had to be an easier and quicker way to do this file-conversion grunt work.

And sure enough... youtube-dl (with some help from PowerShell) does the trick.

Expected input and output

For starters (we are assuming you have already installed Python 3), you will need to pip install youtube-dl, and you will also need the ffmpeg binary on your PATH (note that ffmpeg is a standalone tool, not a pip module; on Windows, grab a build from ffmpeg.org):

 pip install youtube-dl

Next, create a youtube-uris.txt with the URI of each song you want to convert on separate lines of the file.

Finally, you can use the following PowerShell script to read the song URIs into a string array, which is then iterated through, downloading each item as a playable audio file (format 140 is the m4a/AAC audio-only stream), saved in whatever directory you run the script from.

 [string[]]$arrayFromFile = Get-Content -Path 'C:\sand\youtube-dl-sandbox\youtube-uris.txt'  
 foreach ($uri in $arrayFromFile) {  
   youtube-dl -f 140 $uri  
  }  

That's it. This script can be used for automating other manual/repetitive tasks requiring a procedure to be run against every item in a large list.
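If you prefer to stay in Python end to end, youtube-dl also exposes an embedding API; here is a minimal sketch assuming the same youtube-uris.txt file:

 # Minimal sketch using youtube-dl's Python embedding API instead of the
 # PowerShell loop. Assumes youtube-uris.txt sits in the working directory,
 # one URI per line.
 from youtube_dl import YoutubeDL

 with open("youtube-uris.txt") as f:
     uris = [line.strip() for line in f if line.strip()]

 # Format 140 is the m4a (AAC audio-only) stream, as in the PowerShell version.
 with YoutubeDL({"format": "140"}) as ydl:
     ydl.download(uris)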


Reference: https://github.com/rg3/youtube-dl

Generate Secure Machine Key Section for Web.config via PowerShell

Machine keys are used in ASP.NET to encrypt and validate forms-authentication tickets and view state; every machine in a web farm must share the same keys so that encrypted application session and state information can be read by any server.

This PowerShell function can be called (once loaded into your PS session) via:

 PS C:\> Generate-MachineKey


With the output from this PS function, you can copy and paste into your web.config, e.g.:
 <configuration>  
  <system.web>  
   <machineKey ... />  
  </system.web>  
 </configuration>  

Generate-MachineKey function definition/PS script
 # Generates a <machineKey> element that can be copied + pasted into a Web.config file.  
 function Generate-MachineKey {  
  [CmdletBinding()]  
  param (  
   [ValidateSet("AES", "DES", "3DES")]  
   [string]$decryptionAlgorithm = 'AES',  
   [ValidateSet("MD5", "SHA1", "HMACSHA256", "HMACSHA384", "HMACSHA512")]  
   [string]$validationAlgorithm = 'HMACSHA256'  
  )  
  process {  
   function BinaryToHex {  
     [CmdLetBinding()]  
     param($bytes)  
     process {  
       $builder = new-object System.Text.StringBuilder  
       foreach ($b in $bytes) {  
        $builder = $builder.AppendFormat([System.Globalization.CultureInfo]::InvariantCulture, "{0:X2}", $b)  
       }  
       $builder.ToString()
     }  
   }  
   switch ($decryptionAlgorithm) {  
    "AES" { $decryptionObject = new-object System.Security.Cryptography.AesCryptoServiceProvider }  
    "DES" { $decryptionObject = new-object System.Security.Cryptography.DESCryptoServiceProvider }  
    "3DES" { $decryptionObject = new-object System.Security.Cryptography.TripleDESCryptoServiceProvider }  
   }  
   $decryptionObject.GenerateKey()  
   $decryptionKey = BinaryToHex($decryptionObject.Key)  
   $decryptionObject.Dispose()  
   switch ($validationAlgorithm) {  
    "MD5" { $validationObject = new-object System.Security.Cryptography.HMACMD5 }  
    "SHA1" { $validationObject = new-object System.Security.Cryptography.HMACSHA1 }  
    "HMACSHA256" { $validationObject = new-object System.Security.Cryptography.HMACSHA256 }  
    "HMACSHA385" { $validationObject = new-object System.Security.Cryptography.HMACSHA384 }  
    "HMACSHA512" { $validationObject = new-object System.Security.Cryptography.HMACSHA512 }  
   }  
   $validationKey = BinaryToHex($validationObject.Key)  
   $validationObject.Dispose()  
   [string]::Format([System.Globalization.CultureInfo]::InvariantCulture,  
    "<machineKey decryption=`"{0}`" decryptionKey=`"{1}`" validation=`"{2}`" validationKey=`"{3}`" />",  
    $decryptionAlgorithm.ToUpperInvariant(), $decryptionKey,  
    $validationAlgorithm.ToUpperInvariant(), $validationKey)  
  }  
 }  

Accessing SQL Server Data in R

Importing SQL Server data into R for analysis is pretty straightforward. You will obviously need R installed on your machine. The following R code (run from R Studio) will connect to your SQL Server database:

 library(RODBC)
 dbconnection <- odbcDriverConnect('driver={SQL Server};server=.;database=CLARO;trusted_connection=true')
 initdata <- sqlQuery(dbconnection, 'SELECT * FROM [CLARO].[dbo].[Fielding];')
 odbcClose(dbconnection)


SELECT data from a SQL Server database output in R Studio


Accessing SQL Server Data in Python

So you want to access Microsoft SQL Server from your Python script(s)?

After weeding out some long-abandoned and/or nonworking solutions, I discovered a very simple Python ODBC module called "pyodbc" that works with virtually every SQL Server version since MSSQL 2005.

First, you will need to install the Microsoft ODBC Driver for SQL Server (version 13.1 or 17 should work) on your machine, in addition to installing the pyodbc module.

Next, get the pyodbc module for Python by running this from Windows command prompt:

pip install pyodbc

Then open up a python shell using 'py' or 'python' and enter the following after editing configuration values to match your development environment:

 import pyodbc  
 cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=WideWorldImporters;UID=DemoUser;PWD=123Password')  
 cursor = cnxn.cursor()  
 #Sample of a simple SELECT  
 cursor.execute("SELECT TOP (100) Comments, count(*) FROM WideWorldImporters.Sales.Orders GROUP BY Comments")  
 row = cursor.fetchone()   
 while row:   
   print(str(row[0]) + ': ' + str(row[1]))  # str() guards against a NULL Comments value
   row = cursor.fetchone()  

Running this code will print each Comments value alongside its order count if you have configured everything correctly (note this example makes use of the Microsoft SQL Server demo WideWorldImporters database).
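As a follow-on sketch (the CustomerID value below is an arbitrary example, not from the query above), pyodbc also supports parameterized queries via ? markers, which beats string concatenation for anything touched by user input:

 import pyodbc
 cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;'
                       'DATABASE=WideWorldImporters;UID=DemoUser;PWD=123Password')
 cursor = cnxn.cursor()
 # pyodbc substitutes "?" markers safely, avoiding SQL injection.
 cursor.execute(
     "SELECT TOP (10) OrderID, OrderDate FROM Sales.Orders WHERE CustomerID = ?",
     832)  # 832 is an arbitrary example CustomerID
 for row in cursor.fetchall():
     print(row.OrderID, row.OrderDate)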


Reference: https://docs.microsoft.com/en-us/sql/connect/python/pyodbc/step-3-proof-of-concept-connecting-to-sql-using-pyodbc?view=sql-server-2017

.udl for DbConnection Check

This is a useful method to quickly check SQL credentials and/or RDBMS connectivity when working on a Windows OS. Just create a file in any editor (e.g., Notepad) and save it with a .udl extension, which makes it a Microsoft Data Link file type. Then right-click it and inspect file Properties >> the "Connection" tab.
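For reference, a configured .udl file is just a small text file; its contents look roughly like this (the provider and server values below are illustrative, not prescriptive):

 [oledb]
 ; Everything after this line is an OLE DB initstring
 Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=localhost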

Credit to a former colleague of mine (thanks Gene David!) who showed me how to use this simple but very useful trick.



Short Selling

A broker borrows a share, sells the share high, repurchases the share at a lower price, and returns it.


Short selling stock is the practice by which a broker borrows stock in the hope that its price will fall, so that he or she can sell at a high price, (re)purchase at a lower price, and pocket the difference.

Hypothetically, let's say a trader named Joe firmly believed that Apple, Inc. was about to experience a large drop in share price. To short a single share of Apple stock, Joe would do the following:

1) Borrow a share of AAPL from his portfolio, a client portfolio, or a fellow broker
2) Sell the share at the highest price he can find before the drop (say 1 share of AAPL at the current $157.76)
3) Wait for the price to fall (say AAPL falls to $102.76), then purchase one share at this lower price
4) Return the borrowed share and pocket the difference (less fees).
Joe earns a cool $53 from this scheme, as he sold at $157.76 and bought back for just $102.76. After fees of $2.00, that is $157.76 - $102.76 - $2.00 = $53.00; the arithmetic is sketched below.
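 # Joe's short sale as a tiny function. The prices and the $2.00 fee come
 # straight from the example above.
 def short_profit(sell_price, buyback_price, fees, shares=1):
     return (sell_price - buyback_price) * shares - fees

 print(short_profit(sell_price=157.76, buyback_price=102.76, fees=2.00))  # 53.0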

While "selling short" is often associated with a nefarious-seeming stock short like this, oftentimes it is a necessity. The market always needs people on both the long end (owners/buyers) and the short end (renters/sellers) for it to work properly.

This is why banks who are on the hook with a property that they cannot sell will ultimately agree to a "short sale" (selling the home for below its fair market value) to recoup at least some of their losses.

A combination of consumer preferences and financial factors determine whether to go long or short on any kind of investment or large financial transaction.



Short selling doesn't always work in the seller's favor

Refactoring Made Obvious

Refactoring (as a term, if not as a practice) gets thrown around quite a bit. Sometimes really necessary refactoring doesn't get the priority it deserves because it is hard to quantify, or even to visualize or sense, the result of a good refactoring: the result should be experientially transparent relative to the original, and any differences should be optimizations or enhancements, with none of the original functionality lost along the way. We can take the case of a simple JavaScript animation as an example.

Simple animating of "+" dropping across the browser screen

Long ago, I used a mobile app that had a neat UI animation feature I really liked, but it took me a while to track down just how to accomplish it. I found some good starting points on SO, and began implementing a draft on JSFiddle.net.

The animation behavior I set about creating is simply a delayed fall of an HTML element down the screen (via the "topToBottom" variable you'll see below, which is just the browser screen height). In a cascading, sequential set of delays, each falling element is pushed an increasing distance from the left of the screen so that the elements fall independently (otherwise you would see just one column for all of the falling +'s in the end result).

In the following 4 steps, I am going to present a very basic refactoring scenario- going from the code template references I found, to the final refactored code (css, html and .js).

(1) First I found a fiddle via the SO question reference below: http://jsfiddle.net/reWwx/4/

(2) I then changed the code to create my own draft on JSFiddle.net: http://jsfiddle.net/reWwx/539/

(3) Next, I created an .htm file with the CSS styles and JavaScript inline (not quite what we'd want to check into source control...):
 <html>
 <head>
 <!-- jQuery is required for the $() calls below -->
 <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
 <style>
 body {height: 600px; background-color: #999}  
 #line-3 {  
   position:absolute;  
   width:100%;  
   left:20px;  
   top:0px;  
 }  
 #line-4 {  
   position:absolute;  
   width:100%;  
   left:30px;  
   top:0px;  
 }  
 #line-5 {  
   position:absolute;  
   width:100%;  
   left:40px;  
   top:0px;  
 }  
 #line-6 {  
   position:absolute;  
   width:100%;  
   left:55px;  
   top:0px;  
 }  
 #line-7 {  
   position:absolute;  
   width:100%;  
   left:70px;  
   top:0px;  
 }  
 #line-8 {  
   position:absolute;  
   width:100%;  
   left:85px;  
   top:0px;  
 }  
 #line-9 {  
   position:absolute;  
   width:100%;  
   left:100px;  
   top:0px;  
 }  
 #line-10 {  
   position:absolute;  
   width:100%;  
   left:115px;  
   top:0px;  
 }  
 #line-11 {  
   position:absolute;  
   width:100%;  
   left:130px;  
   top:0px;  
 }  
 #line-12 {  
   position:absolute;  
   width:100%;  
   left:145px;  
   top:0px;  
 }  
 #line-13 {  
   position:absolute;  
   width:100%;  
   left:160px;  
   top:0px;  
 }  
 #line-14 {  
   position:absolute;  
   width:100%;  
   left:175px;  
   top:0px;  
 }  
 #line-15 {  
   position:absolute;  
   width:100%;  
   left:195px;  
   top:0px;  
 }  
 #line-16 {  
   position:absolute;  
   width:100%;  
   left:210px;  
   top:0px;  
 }  
 </style>  
 <script>  
 $(document).ready(function(){  
   var bodyHeight = $('body').height();  
   var footerOffsetTop = $('#line-3').offset().top;  
   var topToBottom = bodyHeight -footerOffsetTop;  
  $('#line-3').css({top:'auto',bottom:topToBottom});  
  $("#line-3").delay(100).animate({  
   bottom: '100px',  
   }, 2200);   
  $('#line-4').css({top:'auto',bottom:topToBottom});  
  $("#line-4").delay(108).animate({  
   bottom: '100px',  
   }, 2200);   
  $('#line-5').css({top:'auto',bottom:topToBottom});  
  $("#line-5").delay(145).animate({  
   bottom: '100px',  
   }, 2200);   
  $('#line-6').css({top:'auto',bottom:topToBottom});  
  $("#line-6").delay(119).animate({  
   bottom: '100px',  
   }, 2200);   
  $('#line-7').css({top:'auto',bottom:topToBottom});  
  $("#line-7").delay(115).animate({  
   bottom: '100px',  
   }, 2200);   
    $('#line-8').css({top:'auto',bottom:topToBottom});  
  $("#line-8").delay(176).animate({  
   bottom: '100px',  
   }, 2100);   
    $('#line-9').css({top:'auto',bottom:topToBottom});  
  $("#line-9").delay(13).animate({  
   bottom: '100px',  
   }, 2200);   
    $('#line-10').css({top:'auto',bottom:topToBottom});  
  $("#line-10").delay(12).animate({  
   bottom: '100px',  
   }, 2200);   
    $('#line-11').css({top:'auto',bottom:topToBottom});  
  $("#line-11").delay(11).animate({  
   bottom: '100px',  
   }, 2000);   
    $('#line-12').css({top:'auto',bottom:topToBottom});  
  $("#line-12").delay(10).animate({  
   bottom: '100px',  
   }, 2100);   
    $('#line-13').css({top:'auto',bottom:topToBottom});  
  $("#line-13").delay(11).animate({  
   bottom: '100px',  
   }, 600);   
    $('#line-14').css({top:'auto',bottom:topToBottom});  
  $("#line-14").delay(14).animate({  
   bottom: '100px',  
   }, 700);   
      $('#line-15').css({top:'auto',bottom:topToBottom});  
  $("#line-15").delay(14).animate({  
   bottom: '100px',  
   }, 800);   
      $('#line-16').css({top:'auto',bottom:topToBottom});  
  $("#line-16").delay(24).animate({  
   bottom: '100px',  
   }, 900);   
 })  
 </script>  
 </head>  
 <body>  
 <div id="line-3">+</div>  
 <div id="line-4">+</div>  
 <div id="line-5">+</div>  
 <div id="line-6">+</div>  
 <div id="line-7">+</div>  
 <div id="line-8">+</div>  
 <div id="line-9">+</div>  
 <div id="line-10">+</div>  
 <div id="line-11">+</div>  
 <div id="line-12">+</div>  
 <div id="line-13">+</div>  
 <div id="line-14">+</div>  
 <div id="line-15">+</div>  
 <div id="line-16">+</div>  
 </body>    
 </html>  
(4) And lastly I identified all of the repeating parts and made them dynamic in JavaScript, using the jQuery library to shorten much of the .js behavior:
  <html>   
  <head>   
  <style>   
  body {height: 600px; background-color: #000000; color:lime;}   
  div {   
   position:absolute;   
   top:0px;   
    width:100%;   
  }   
  </style>   
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>   
  <script>   
  $(document).ready(function(){   
   var base = $('#base');  
   var topToBottom = $('body').height();    
   for(var i=0; i<83; i++){
    base.append('<div id="line-' + i + '" style="left:' + (i*10) + 'px">+</div>');
    $("#line-" + i).css({top:'auto', bottom:topToBottom}).delay(100*i).animate({bottom: '100px'}, 1000);
   }   
  })   
  </script>   
  </head>   
  <body id="base">   
  </body>    
  </html>   

The final result has the same behavior as the draft, but it eliminates repetition by dynamically generating the HTML and dynamically attaching the falling action (which is really just coordinated position and visibility property changes behind the scenes of .animate()). Eliminating duplication, standardizing for better readability, reorganizing for clarity of purpose, and finding patterns (or finding different patterns that better match the task) are the key concepts in refactoring.

Try it yourself by copying the code above, saving to an .htm file and opening the file in a web browser.

Final JSFiddle result: https://jsfiddle.net/radagast/6uzypc80/5

Larger font is always fun: https://twickrtape.azurewebsites.net/Home/About

Reference: https://stackoverflow.com/questions/8518400/jquery-animate-from-css-top-to-bottom

JavaScript for Progress on Scrolling

This topic is a prime example of why I write this web log. I've seen this functionality on countless web pages and mobile apps, but for some reason it is not well explained in most of the places you will likely land when first searching for instructions on how to do it (if you wound up here first, great; yay me).

Potential applications of progress on scroll might be a "Terms and Conditions" view, a code walk-through, etc.

The key code is all in the $(window).scroll() handler. You have these 3 main components:

  • Document Height: $(document).height() === height of the entire rendered document
  • Window Height: $(window).height() === height of the "viewable" window, aka the viewport
  • Scroll Top: $(window).scrollTop() === number of pixels the content has been scrolled

With these values you can set the width of the progress bar (the div with the class "scroll-progress"). The width is a simple calculation: scrollTop / (docHeight - windowHeight) * 100.

So you can attach this logic to an anonymous function within the browser's window scroll event a la: $(window).scroll(function() { ...

And then simply assign the result of  "$(window).scrollTop() / (docHeight - windowHeight) * 100;" to the width property of your <div class="scroll-progress">.

That's it.

Code:

 <!doctype html>
 <html>
 <head>
 <title></title>  
 <style>  
 .header-container {  
  width: 100%;  
  height: 60px;  
  background-color: white;  
  position: fixed;  
  z-index: 10;  
  top: 0;  
  left: 0;  
 }  
 .header {  
  padding-left: 10px;  
 }  
 h1 {  
  margin-top: 15px;  
 }  
 .scroll-progress-container {  
  width: 100%;  
  height: 5px;  
  background-color: white;  
  top: 55px;  
  position: absolute;  
 }  
 .scroll-progress {  
  width: 0px;  
  height: 5px;  
  background-color: purple;  
 }  
 .filler-text {  
  width: 60%;  
  margin-top: 80px;  
  margin-left: 50px;  
  position: absolute;  
 }  
 </style>  
 <script src="https://code.jquery.com/jquery-1.11.2.min.js"></script>  
 <script>  
  $(document).ready(function() {  
    var docHeight = $(document).height(),  
    windowHeight = $(window).height(),  
    scrollPercent;  
     $(window).scroll(function() {  
       scrollPercent = $(window).scrollTop() / (docHeight - windowHeight) * 100;  
       $('.scroll-progress').width(scrollPercent + '%');  
     });  
 });  
 </script>  
 </head>  
 <body>  
 <div class="header-container">  
 <div class="header">  
 <h1>Progress Scroll Bar Example</h1>  
 </div>  
 <div class="scroll-progress-container">  
 <div class="scroll-progress"></div>  
 </div>  
 </div>  
 Enter lots and lots of text here...  
 </body>  
 </html>  


References

https://www.veracode.com/blog/managing-appsec/building-scroll-progress-bar-javascript-and-jquery

https://stackoverflow.com/questions/14035819/window-height-vs-document-height

DNS Rebinding and the Fallacy of "Walled Gardens"

Attacking Private Networks from the Internet with DNS Rebinding

The well-written research article above by Brannon Dorsey is a must-read for any developer of home-integrated devices, and perhaps for all developers, even those whose products run within highly secure networks, which, via an authenticated user (as was done with Stuxnet), can inadvertently intercept and execute malicious code.

Don't let script kiddies mess with your code via DNS

Essentially, Mr. Dorsey discovered that smart-home gadgets that are supposed to operate securely within private networks can be exploited from the outside. An attacker simply embeds a network-hijacking script in a link; through dynamic IP-to-same-origin-hostname switching* (i.e. "DNS spoofing") performed by a malicious DNS server, the script's requests appear to the victim's browser to come from a trusted, same-origin source.

e.g.

Malicious hostname: exploit.net
Malicious ip address: 59.33.12.9

Victim recent request hostname: somebank.com
Victim recent request ip address: 122.76.21.19

<<Very brief DNS Hijack via a malicious DNS server>>

Malicious hostname: exploit.net
Malicious ip address as far as victim browser client is aware: 122.76.21.19

See the problem?

He goes on to explain how entire protocols like UPnP "are built around the idea that devices on the same network can trust each other".

Remember: what appears to the browser client to be a "same-origin" request is not always actually a same-origin request. Make sure that you change the default credentials on your network router(s) and as the article above insists:

"We need developers to write software that treats local private networks as if they were hostile public networks. The idea that the local network is a safe haven is a fallacy. If we continue to believe it people are going to get hurt."

Good to remember: protocol://host:port/path?query (the scheme, host, and port together make up the origin)

*DNS cache poisoning, also known as DNS spoofing, is a type of attack that exploits vulnerabilities in the domain name system (DNS) to divert Internet traffic away from legitimate servers and towards fake ones.

Visualize Hashing and Salt as Part of Password Encryption Process

The image below is a simplified, easy-to-understand illustration of how hashing and salting work. The main takeaway from this post: multiple users can have the same password, but they will all have different salt values, making their hash result values different. And when you authenticate, you authenticate by the hash result value of your password, which is virtually always going to be unique for each user record:

Simple, no?

Even in the (extraordinarily unlikely) case of 2 users ending up with the same hash result, the usernames will/should not be the same, so you still have distinct accounts, because the UserID is also checked in the authentication process.

Companies increasingly (and for good data-privacy reasons) do not even store the clear-text value you type when you sign up for and then log into Fb, Google, Amazon, etc. They check your entered password's hash result against the hash result they have on file for your account, from when you registered or last changed your password.
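A minimal sketch of that register/login flow using Python's standard library (PBKDF2 here is just one common choice of password-hashing function, not a claim about what any particular company uses):

 # Minimal sketch of salt + hash with the standard library. Two users with
 # the same password get different salts, hence different stored hashes;
 # login re-derives the hash from the stored salt and compares digests.
 import os, hashlib, hmac

 def register(password: str):
     salt = os.urandom(16)  # unique random salt per user
     digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
     return salt, digest    # store both; never store the clear-text password

 def login(password: str, salt: bytes, stored_digest: bytes) -> bool:
     candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
     return hmac.compare_digest(candidate, stored_digest)

 alice = register("hunter2")
 bob = register("hunter2")          # same password...
 print(alice[1] != bob[1])          # ...different hashes: True
 print(login("hunter2", *alice))    # True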

Good answer to the question you may come across, "what is the difference between salt and an IV (initialization vector)?" (TL;DR: not all IV's are salt, but salt is a kind of IV): https://security.stackexchange.com/questions/6058/is-real-salt-the-same-as-initialization-vectors


Quality Control

You should know at least the surface topics surrounding TQM (Total Quality Management) because nearly all modern businesses practice TQM strategies and tactics to reduce costs and ensure top quality.

But first, check out this old video clip of America discovering something that, ironically, an American (W. Edwards Deming) had exported to Japan with great success years before:

1980 NBC News Report: "If Japan Can, Why Can't We?"

So big-Q "Quality" became a big hit and has been embedded in process management throughout the globe ever since.

I think he has a point here.

Here are some Quality buzz words that surely you've heard before:

ASQ - American Society for Quality

"Black Belt" - Ooo. Ahh. It does mean something. It means a person has passed a series of very difficult exams on statistics and statistical process control for quality based on the quantitative technics and measures originated in Japan by W. Edwards Deming.

ISO 9001 - the International standard of a Quality Management System that is used to certify that business processes follow standard process and product guidelines.

Kaizen - a long-term approach to work that systematically seeks to achieve small, incremental changes in processes in order to improve efficiency and quality.

Kanban -  a visual system for managing work as it moves through a process.

Lean - a synonym for continuous improvement through balanced efficiency gains.

Example of statistical process control using UCL and LCL boundaries and a process (Fall Rate) improving.

LCL* - Lower Control Limit - The lower boundary, below which a process is statistically unstable.

MAIC - Measure, Analyze, Improve, Control.

Service Level Agreements (SLA) - A contract between a service provider and end user that defines the expected level of service to the end user.

UCL* - Upper Control Limit - The upper boundary, beyond which a process is statistically unstable.

Uptime - A measure of the time a service is working and available; the opposite of Downtime.

Six Sigma - a statistical approach to process improvement and quality control; sometimes described as keeping a process within ±3 standard deviations of the mean (six in total, hence the "6"), sometimes as achieving ±6 standard deviations between the mean and the nearest specification limit.

The table above gives you an idea of realistic process-improvement numbers (66,800 defects per million is a lot of defective items).
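That 66,800 figure matches the conventional sigma-level table, which models a 1.5-sigma long-term drift; you can reproduce the table with nothing but the standard library:

 # Quick check of the ~66,800 defects-per-million figure. Conventional
 # Six Sigma tables assume a 1.5-sigma long-term drift, so a "3 sigma"
 # process is modeled as 3.0 - 1.5 = 1.5 short-term sigmas from the limit.
 from statistics import NormalDist

 shift = 1.5
 for sigma_level in (3, 4, 5, 6):
     dpmo = (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000
     print(f"{sigma_level} sigma: ~{dpmo:,.1f} defects per million")
 # 3 sigma: ~66,807.2   ...   6 sigma: ~3.4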


History and W. Edwards Deming
Quality Management is a permanent organizational approach to continuous process improvement. It was successfully applied by W. Edwards Deming in post-WWII Japan. Deming's work began in August 1950 at the Hakone Convention Center in Tokyo, when Deming delivered a speech on what he called "Statistical Product Quality Administration".

He is credited with helping hasten Japanese recovery after the war and then later helping American companies embrace TQM and realize significant efficiency and quality gains.


Deming's 14 Points for Total Quality Management

*Measures such as standard deviation and other distribution-based statistics determine the LCL and UCL for a process (any process- temperature of a factory floor, time to assemble a component, download/upload speed, defects per million, etc.).
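As a minimal sketch of the footnote above (the fall-rate samples are made up for illustration), control limits for an individuals chart are commonly set at the mean ± 3 standard deviations:

 # Minimal sketch of deriving UCL/LCL for a process: mean +/- 3 standard
 # deviations of the observed samples. The fall-rate numbers are made up.
 from statistics import mean, stdev

 fall_rates = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2]  # hypothetical samples
 center = mean(fall_rates)
 ucl = center + 3 * stdev(fall_rates)
 lcl = max(0.0, center - 3 * stdev(fall_rates))  # a rate can't go below zero

 print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
 # Points outside [LCL, UCL] signal a statistically unstable process.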

References:

http://asq.org/learn-about-quality/total-quality-management/overview/deming-points.html

https://www.quora.com/How-did-W-Edwards-Deming-influence-Japanese-manufacturing