[TestMethod]
public async Task CreateGetDeleteSubscriptionSucceeds()
{
string json = @"{
""@odata.context"": ""https://localhost/reports/api/v2.0/$metadata#Subscriptions/$entity"",
""Owner"": """ + Resources.User + @""",
""IsDataDriven"": false,
""Description"": ""string..."",
""Report"": ""/Reports/USPolls"",
""IsActive"": true,
""EventType"": ""TimedSubscription"",
""ScheduleDescription"": ""string..."",
""LastRunTime"": ""2023-04-13T15:51:04Z"",
""LastStatus"": ""string..."",
""DeliveryExtension"": ""Report Server Email"",
""LocalizedDeliveryExtensionName"": ""Email"",
""ModifiedBy"": """ + Resources.User + @""",
""ModifiedDate"": ""2023-04-13T15:51:04Z"",
""Schedule"": {
""ScheduleID"": null,
""Definition"": {
""StartDateTime"": ""2021-01-01T02:00:00-07:00"",
""EndDate"": ""0001-01-01T00:00:00Z"",
""EndDateSpecified"": false,
""Recurrence"": {
""MinuteRecurrence"": null,
""DailyRecurrence"": null,
""WeeklyRecurrence"": null,
""MonthlyRecurrence"": null,
""MonthlyDOWRecurrence"": null
}
}
},
""DataQuery"": null,
""ExtensionSettings"": {
""Extension"": ""DeliveryExtension"",
""ParameterValues"": [
{
""Name"": ""TO"",
""Value"": ""colin@sonrai.io"",
""IsValueFieldReference"": false
},
{
""Name"": ""IncludeReport"",
""Value"": ""true"",
""IsValueFieldReference"": false
},
{
""Name"": ""Subject"",
""Value"": ""true"",
""IsValueFieldReference"": false
},
{
""Name"": ""RenderFormat"",
""Value"": ""PDF"",
""IsValueFieldReference"": false
}
]
},
""ParameterValues"": []
}";
Subscription subscription = await ssrs.SaveSubscription(JsonConvert.DeserializeObject<Subscription>(json)!);
Assert.IsNotNull(subscription.DeliveryExtension);
var getResponse = await ssrs.GetSubscription(subscription.Id.ToString()!);
Assert.IsNotNull(getResponse.Id);
var delResp = await ssrs.DeleteSubscription(subscription.Id.ToString()!);
Assert.IsTrue(delResp);
}
extRS for useful common logic, reference data and extending SSRS
An SSRS IFrame/CORS infinite redirect loop error and a quick and easy solution
If you are trying to render the SSRS ReportViewer control within an <iframe>, you may run into a CORS issue that manifests as a series of 302 (Found) responses and an infinite redirect loop between the ReportViewer control page (ReportViewer.aspx) and Logon.aspx.
As of SSRS 2022, without an explicit instruction to allow CORS, ReportViewer cannot be rendered within an <iframe> on an origin different from that of the report server.
If you are using custom authentication, the solution is easy enough. Just add cookieSameSite="None" and enableCrossAppRedirects="true" to the authentication <forms> tag in the report server's web.config.
<authentication mode="Forms">
<forms loginUrl="logon.aspx" name="sqlAuthCookie" cookieSameSite="None" timeout="60" path="/" enableCrossAppRedirects="true" requireSSL="true">
</forms>
</authentication>
If the calling application is an ASP.NET Core app, you will also need to allow the report server's origin in that app's CORS policy:
app.UseCors(builder => builder
.WithOrigins("https://localhost", "https://[domain]")
.AllowAnyMethod()
.AllowAnyHeader());
Power BI and SSRS - A complementary symbiosis
"Generally, Power BI paginated reports (SSRS reports) are optimized for printing, or PDF generation. Power BI reports are optimized for exploration and interactivity."
So, what really IS up with MSBI these days? Is SSRS getting shuttered? PBI has paginated RS-like reports, but not many of the other features SSRS provides. Microsoft marketing will continue to hate people like me who go to great lengths to help keep an aging reporting technology alive. But the thing is- SSRS simply does the job for the vast majority of reporting use cases. In the last 18-20 years there have been no major advancements in scheduling, snapshotting, caching, managing and distributing electronic information- and SSRS has all of that built in.
Nearly all of the advancements in reporting technology have come on the presentation and client side. We can now create beautiful ad-hoc analysis and brilliantly composed interactive charts and other data presentations. But this all comes with a not-insignificant price ($10 per user/month). And beyond the price- much like Azure SQL (vs. a genuine "Microsoft SQL Server" VM) and the extremely limited Azure SQL Workbench (vs. SSMS)- there is a lot that Power BI cannot do well, or at all.
You may have noticed the built-in SSRS reports in SSMS 19's new Query Store feature. These are very useful reports that give DBAs an idea of how queries are being processed and which processes are consuming the most CPU- and a good example of a company "eating its own dog food".
I've seen SSRS installations containing thousands of reports representing trillions of dollars of value, categorized and summarized with real-time security ownership, counterparty, price and other core trade data. Several of these business-critical reports had scheduled delivery and were cached and snapshotted programmatically.
Little old SSRS is a quiet but reliable business data spartan. To my surprise it is actually quite popular in the investment banking industry, where stock valuations and company summaries on reports are a big part of the lifeblood that drives investment banking decision making.
And with a little bit of customization magic via things like ExtRSAuth, ExtRSNET48, ExtRS and other RS extension tools, SSRS and Power BI can be tailor-made to serve as a uniquely effective symbiosis of print-formatted, scheduled and data-driven management reports (SSRS) and ad-hoc or OLAP-based interactive data analysis with data visualizations (Power BI).
To answer the question of "when will SSRS be end of life?", I would say that SSRS isn't going away anytime soon. Microsoft has decided to combine SSRS and PBI (RS .rdl reports are the "Paginated Reports" in PBI) in a way that serves both platforms. The PBI 3.0 REST API indicates as much, as the combined SSRS/PBI API offers a plethora of functionality that .NET developers can use to get the best of both worlds (SSRS and Power BI) and customize RS and/or PBI dashboards to support unique business processes.
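For instance, a minimal sketch of calling the Power BI REST API from .NET (this assumes you have already acquired an Azure AD access token for the Power BI service; token acquisition is elided here):
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
public class PbiClient
{
    static readonly HttpClient client = new HttpClient();
    public static async Task<string> GetReportsJsonAsync(string accessToken)
    {
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        //the documented Power BI REST endpoint for listing reports in "My Workspace"
        var response = await client.GetAsync("https://api.powerbi.com/v1.0/myorg/reports");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}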
The choice of which tool or tools you enrich your printed reports and data visualizations with is yours. Keep in mind that many organizations make use of both- with SSRS getting equal or more attention than PBI even to this day- not only because of the huge global SSRS install base and all of the currently running SSRS reports (many supporting critical business and governmental processes across the globe), but also because Power BI requires a monthly subscription fee :( . Freeware seems to be slowly dying. Let's hope things change with the next version of SQL Server and maybe we'll get free* PBI.
SQL Server 2025? Happy reporting and data analyzing. Always remember that PBI and SSRS serve different organizational needs- ad-hoc data analysis and pixel-perfect, professional, print-ready reports, respectively.
*(at least a free "tier"? I mean c'mon MSFT..... developers want to CREATE, and MSBI data visualization creativity is dying behind that paywall)- SSRS and PBI should be free and work hand in glove. Anything less is a mistake and a gigantic missed opportunity, imho.
Reference: https://learn.microsoft.com/en-us/power-bi/guidance/migrate-ssrs-reports-to-power-bi
ExtRSAuth for Custom SSRS Authentication (works w/newest SSRS version 16.0.8)
ExtRSAuth for custom SSRS security
This assembly, forked from the Microsoft Custom Security Sample, extends and improves custom authentication to allow for mechanisms other than a user/password credential check and to offer a seamless pass-through of the Login page if something present in the HttpRequest verifies that the user is already authenticated. For instance, the user already has an app token from an app that communicates with the report server, and you require that communications with the report server not involve any intermediate screen or login UI. The user just wants to auth as fast as possible and get to their report, right?
Custom SSRS Authentication Extended
private void Page_Load(object sender, EventArgs e)
{
//Your secret authentication sauce goes here..
//appHash should be dynamically generated by the app calling SSRS (ideally per request, if performant enough)
//e.g.:
//if (CheckAuth(System.Web.HttpContext.Current.Request.Cookies["origAppHash"].ToString()))
//if (CheckAuth(System.Web.HttpContext.Current.Session["otroAppHash"].ToString()))
if (System.Web.HttpContext.Current.Request.IsLocal)
FormsAuthentication.RedirectFromLoginPage("daylite", true);
}
private bool CheckAuth(string appHash)
{
//decode and crypto-check the appHash here
return true; //placeholder so the sample compiles- a real implementation must validate the hash
}
MS' example uses Page_Load(); presumably Page_PreLoad() or Page_Init() would also work here- it's just an HttpRequest evaluation.
Locations - Google Maps API, ASP.NET Core and SQL Server
<script>
function initMap() {
var map = new google.maps.Map(
document.getElementById('map'),
{
center: new google.maps.LatLng(@Model.CenterLat, @Model.CenterLong),
zoom: 8
}
);
var pins = @Html.Raw(Json.Serialize(@Model.Locations));
for (var i = 0; i < pins.length; i++) {
var myLatLng = {
lat: pins[i].lat,
lng: pins[i].long
};
addMarker(myLatLng, map, pins[i]);
}
}
function addMarkerAsync(location, map) {
    // the Marker must be captured in a variable before calling setMap()
    var marker = new google.maps.Marker({
        position: location,
        title: 'Home Center',
    });
    marker.setMap(map);
}
function addMarker(location, map, pin) {
var marker = new google.maps.Marker({
position: location,
title: '...something dynamic...',
});
var infowindow = new google.maps.InfoWindow({
content: ''
});
function AsyncDisplayString() {
$.ajax({
type: 'GET',
url: '/Home/GetLocationModalInfo',
dataType: "HTML",
contentType: 'application/json',
traditional: true,
data: pin,
success: function (result) {
infowindow.setContent('<div style="background-color:#000000;">' + result + '</div>');
infowindow.open(map, marker);
},
error: function (arg) {
alert('Error');
}
});
}
google.maps.event.addListener(marker, 'click', function () {
AsyncDisplayString();
});
marker.setMap(map);
}
</script>
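The AJAX call above targets /Home/GetLocationModalInfo, which is not shown here; a hypothetical sketch of that action might look like this (the partial view name and model binding are assumptions):
[HttpGet]
public IActionResult GetLocationModalInfo(LocationModel pin)
{
    //return an HTML fragment the script can inject into the InfoWindow
    return PartialView("_LocationModal", pin);
}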
[HttpPost]
public IActionResult AddLocation(LocationModel location)
{
string address = location.StreetAddress1.Replace(" ", "+") + "," + location.City.Replace(" ", "+") + "," + location.State.Replace(" ", "+");
MapPoint coords = _locationService.GetLatLongFromAddress(address);
location.Lat = (decimal)coords.Latitude;
location.Long = (decimal)coords.Longitude;
using (var db = new SqlConnection(_configuration.GetConnectionString("DefaultConnection")))
{
db.Open();
string sql = @"INSERT INTO [Locations].[dbo].[Locations] ([Name], [Contact], [Email], [Website], [Phone], [StreetAddress1], [StreetAddress2], [City]"
+ ",[State], [Zip], [LocationContact], [PrimaryContact], [Notes], [Type], [Lat], [Long], [Petitions], [Flyers], [Posters], [LastPickUpDateTime], [LastOutOfStockDateTime], LastDropoffDateTime"
+ ",[AllTimeOutofStock],[Unsupportive],[VolunteerInterest])"
+ " VALUES ('" + location.Name + "','" + location.Contact + "','" + location.Email + "','" + location.Website + "','" + location.Phone + "','" + location.StreetAddress1 + "','" + location.StreetAddress1 + "','" + location.City + "'"
+ ",'" + location.State + "','" + location.Zip + "', -1, -1,'" + location.Notes + "', 1, " + location.Lat + "," + location.Long + "," + location.Petitions + "," + location.Flyers + "," + location.Posters + ",'" + location.LastPickUpDateTime + "','" + location.LastOutOfStockDateTime + "','" + location.LastDropoffDateTime + "', 0, 0, 1) " + ";";
db.Execute(sql);
}
var model = GetDefaultMapView();
model.KeyString = _configuration["MapsAPIKey"].ToString();
return View("Map", model);
}
SSRS REST API v2
The SSRS API v2 has far more functionality than v1, but they work essentially the same way. You must be authenticated to the SSRS report server you are targeting (localhost in this case) to make GET/POST requests against the API.
Once auth'd, you can push and pull any useful SSRS data pretty easily and make SSRS do some pretty cool things it can't do out of the box.
You can get a collection of SSRS catalog items (folders, reports, KPIs) by just specifying the action name, or you can select an individual item by putting the item GUID in parentheses in the API request URL:
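For example (the GUID below is just a placeholder for a real catalog item ID):
GET https://localhost/reports/api/v2.0/Reports
GET https://localhost/reports/api/v2.0/Reports(d3a8b5f0-1c2d-4e3f-9a8b-7c6d5e4f3a2b)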
Common Useful SSRS API v2 Actions:
- Reports
- Datasets
- Data Sources
- Folders
- Schedules
- Subscriptions
- Comments
- KPIs
- CatalogItems (everything)
Example of a .NET Standard library with an HttpService abstracting the SSRS API calls:
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace ExtRS
{
public class SSRSHttpService
{
const string ssrsApiURI = "https://localhost/reports/api/v2.0";
//UseDefaultCredentials passes the current Windows identity (NTLM/Kerberos) to the report server
readonly HttpClient client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true });
public async Task<GenericItem> GetReportAsync(Guid id)
{
    //build the full request URI per call- HttpClient.BaseAddress can only be set once per instance
    var uri = new Uri(ssrsApiURI + string.Format("/reports({0})", id));
    var response = await client.GetAsync(uri);
    var odata = await response.Content.ReadAsStringAsync(); //await rather than blocking on .Result
return JsonConvert.DeserializeObject<GenericItem>(odata);
}
}
}
A very basic class designed to demonstrate creating a .NET object from an SSRS API response:
using Newtonsoft.Json;
using System.Collections.Generic;
namespace ExtRS
{
public class GenericItem
{
[JsonProperty("@odata.context")]
public string ODataContext { get; set; }
[JsonProperty("Id")]
public string Id { get; set; }
[JsonProperty("Name")]
public string Name { get; set; }
[JsonProperty("Path")]
public string Path { get; set; }
}
}
using System;
using System.Web.Mvc;
using System.Threading.Tasks;
using ExtRS;
namespace Daylite.Controllers
{
public class ReportsController : Controller
{
public SSRSHttpService service = new SSRSHttpService();
public async Task<ViewResult> GetReportsAsync()
{
return View("Index", await service.GetReportsAsync());
}
public async Task<ViewResult> GetFoldersAsync()
{
APIGenericItemsResponse result = await service.GetFoldersAsync();
return View("Index", result);
}
public async Task<ViewResult> GetReportAsync(Guid id)
{
GenericItem result = await service.GetReportAsync(id);
return View("Index", result);
}
}
}
Reference: https://github.com/Microsoft/Reporting-Services/tree/master/APISamples
Why NoSQL is Never Going to Replace SQL (apple:orange)
I'll take the above bad analogy further and posit that while sedans and cars on the ground require stringent rules and have to navigate much more rigid structures, a jet engine simply powers the jet ahead through constraint-less skies- its purpose is to power something big, not to be concerned with the other machinery of the craft (i.e. the RDBMS features eschewed by NoSQL solutions).
Do you need massive global data-sync scale, so that millions can connect and make changes and the results all (appear) real-time? If not, NoSQL is not always the right choice- and neglecting to have any kind of schema for stored application data structures can present its own host of challenges in the future if (when) those structures change. But alas, you can use NoSQL for some things (Redis, image/BLOB storage) and an RDBMS for others (more structured records, and things you want to restore to a point in time in the event of a server failure).
The SQL vs. NoSQL debate (structured and transactional vs. semi-structured and "eventually consistent") is not a matter of one or the other, full stop. These are complementary technologies, and both should be used- wherever app requirements suggest one or the other makes the most sense.
When an application is meant to scale immensely and there are few data integrity, consistency, transaction or complex data structuring and transformation needs, NoSQL is your best bet: it will far outscale even the most robust RDBMS server farm, and at a much lower cost (the cost being RDBMS features you may not need).
I have personally worked on several projects that utilize relational and unstructured approaches to reading and persisting application data. If you have ever used an application's config file to change a setting in JSON or XML or a simple line entry- you are seeing a small and very basic NoSQL example of storing app data.
Using NoSQL in software development can make the data structures and objects- passed to and from APIs and within the application itself- much more flexible to work with: one line to serialize an object to a JSON chunk, save it to BLOB storage and forget about it.
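For example, a sketch using Newtonsoft.Json, as in the other examples here ('order' is any plain .NET object, and the local path stands in for your BLOB store):
using System.IO;
using Newtonsoft.Json;
File.WriteAllText(@"C:\blobstore\order-1234.json", JsonConvert.SerializeObject(order));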
When dealing with relational data, you really have to understand the data to write good data access code and the underlying SQL that supports well-defined structuring of complex objects.
Well-defined structuring of the persistence of complex application objects avoids data duplication/corruption, prevents breaking reference constraints and losing any sense of hierarchical data relationships and more generally lets you know very quickly when you have a problem within your data storage structures and the objects that initialize themselves from that data.
NoSQL ditches virtually all relational database normalization rules in favor of a loosely-schema'd, unstructured (document, BLOB, key-store, etc.) data store that relies solely on keys, values and filtering of unstructured metadata to get the same SELECT ... WHERE functionality found in an RDBMS. Its query iterations usually bear a resemblance to Java. Loosely, here are a couple of common SQL statements and a NoSQL equivalent:
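A rough sketch of the idea using the MongoDB .NET driver (the database, collection and field names here are hypothetical):
using MongoDB.Bson;
using MongoDB.Driver;
// SQL:   SELECT * FROM Customers WHERE State = 'WI'
// NoSQL equivalent with the MongoDB .NET driver:
var client = new MongoClient("mongodb://localhost:27017");
var customers = client.GetDatabase("crm").GetCollection<BsonDocument>("customers");
var wisconsin = customers.Find(Builders<BsonDocument>.Filter.Eq("state", "WI")).ToList();
// SQL:   INSERT INTO Customers (Name, State) VALUES ('Acme', 'WI')
customers.InsertOne(new BsonDocument { { "name", "Acme" }, { "state", "WI" } });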
The best distributed NoSQL solutions, like Hadoop, really shine in their inherent ability to dynamically scale across as many server machines as the operators can make ready to serve as "Hadoop processor nodes on standby".
SQL Server scaling, by contrast, is based more on augmenting the server, or "scaling up" (adding RAM, faster SSDs, RAID arrays, etc.), than on distributing workloads across dynamic nodes. SQL Server AlwaysOn Availability Groups and the Mirroring and Replication features are for recoverability and data sharing- not for dynamically scaling out to handle ever-bigger workloads.
SQL has been around forever. The fundamental concept behind NoSQL (semi-structured or loosely structured data) has been around since long before SQL relational database technology. Both (along with NoSQL-related graph database paradigm) will continue to serve as viable data storage solution alternatives for many more years into the 21st century.
In fact, SQL Server 2019's PolyBase extension supports T-SQL query integration with Hadoop clusters, MongoDB and Teradata. A newer feature called SQL Server Big Data Clusters helps make distributed NoSQL nodes manageable within the SSMS environment.
Likewise, Mongo, Hadoop and other NoSQL database servers have SQL Server integration to support relational data sources.
CAP Theorem: a distributed data system like most all NoSQL solutions can only achieve 2 of the 3 features: "Consistency", "Availability" and "Partition Tolerance"
ACID vs. BASE: the relational axiom of "Atomic, Consistent, Isolated, Durable" contrasted against NoSQL's vaguer promise of "Basically Available, Soft State, Eventually Consistent" (dirty reads common)
Many more millions of applications have been using one riff or another of NoSQL (semi-structured data) before, during and after the mythical "Relational Movement" as described by software veteran Robin Bloor:
"The Relational Model of Data Never Dominated Anyway. Estimates vary, but it is generally agreed that somewhere between 70% and 95% of the world’s data is stored only in poorly structured or unstructured formats such as: word processing documents, spreadsheets, HTML files and e-mail. The truth is that Relational database never did really dominate. It was rejected out of hand, year after year, as an effective store for many types of data." -Robin Bloor on insideanalysis.com
Considerations when evaluating whether to use NoSQL:
- NoSQL is a precise tool for precise data needs; if relational SQL is too much for your group, NoSQL will likely be too steep a learning curve
- Data integrity- when billions of NoSQL records are affected by a small change in schema that cannot propagate correctly, runs into constraint issues, or leaves hierarchy and relations impossible to infer... maybe relational SQL would be a better approach
- NoSQL touts its loose schema structure as a benefit, but this simply means schema and data-structure enforcement has shifted from the database layer to the application layer. Data cannot "self-manage".
- Some apps are prime candidates for NoSQL's document-centric and resource-centric distributed storage architecture
Also, there is this to consider:
If NoSQL solutions eventually achieve the same transactional consistency and complex schema structures that some applications require, and then ultimately subsume RDBMS completely- it will still require a lot of SQL gurus to convert and integrate all the legacy relational database apps for a long, long time to come...
Bring on MongoDB, CouchDB, Dynamo, MapReduce, HBase, BigTable, Cassandra.
Long live SQL Server 2030. 😉
References:
https://blog.timescale.com/why-sql-beating-nosql-what-this-means-for-future-of-data-time-series-database-348b777b847a/
https://www.wired.com/2012/12/couchdb/
https://pdfs.semanticscholar.org/a6f0/1c9103d3bafb8ce92641c9f2a4deaccd12f9.pdf
https://www.memsql.com/blog/why-nosql-databases-wrong-tool-for-modern-application/
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1589423200346982646
https://news.ycombinator.com/item?id=479165
https://insideanalysis.com/is-the-relational-database-doomed/
SQL CLR for .NET in SQL Server
SQL is a great data language, but it is not the right language for some tasks. Creating a SQL CLR object from a .NET assembly may be the best approach in some unique situations (with the bonus that, in many cases, you can reuse existing .NET code).
Before creating the CLR object we need a .NET .dll, so first we create a basic .NET assembly, compile it in Release, and copy the path to the compiled .dll:
Next, import the assembly into the SQL Server instance via SSMS*, then create a T-SQL function or stored procedure to serve as a caller for the function and run it:
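A minimal sketch of all three steps follows; the assembly name, file path and regex function are illustrative, not from the original post:
// Step 1: the .NET assembly- a simple scalar UDF, compiled in Release
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
public class ExtFunctions
{
    [SqlFunction(IsDeterministic = true)]
    public static SqlBoolean RegexMatch(SqlString input, SqlString pattern)
    {
        if (input.IsNull || pattern.IsNull) return SqlBoolean.False;
        return System.Text.RegularExpressions.Regex.IsMatch(input.Value, pattern.Value);
    }
}
// Steps 2 and 3, run in SSMS (T-SQL kept in comments so this block stays C#):
// EXEC sp_configure 'clr enabled', 1; RECONFIGURE;
// CREATE ASSEMBLY ExtFunctions FROM 'C:\builds\Release\ExtFunctions.dll' WITH PERMISSION_SET = SAFE;
// CREATE FUNCTION dbo.RegexMatch(@input NVARCHAR(MAX), @pattern NVARCHAR(MAX)) RETURNS BIT
//     AS EXTERNAL NAME ExtFunctions.ExtFunctions.RegexMatch;
// SELECT dbo.RegexMatch(N'ssrs-2022', N'^[a-z]+-[0-9]{4}$');  -- returns 1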
And that is all there is to it. Only use CLR functions when absolutely necessary, as RDBMSs like SQL Server are designed to process relational data in sets, not to apply complex business logic to individual rows. But if there is no other way, SQL CLRs can provide a solution to your code/logic integration problems.
*Warning and Reference: https://blog.netspi.com/attacking-sql-server-clr-assemblies/
Accessing SQL Server Data in R
library(RODBC)
# trusted_connection uses the current Windows identity; '.' is the local default instance
dbconnection <- odbcDriverConnect('driver={SQL Server};server=.;database=CLARO;trusted_connection=true')
initdata <- sqlQuery(dbconnection, paste('SELECT * FROM [CLARO].[dbo].[Fielding];'))
odbcClose(dbconnection)  # close the connection when finished
Accessing SQL Server Data in Python
After weeding out some long-abandoned and/or non-working solutions, I discovered a very simple Python ODBC driver that works with virtually all SQL Server versions since 2005, called "pyodbc".
First, you will need to install the Microsoft ODBC Driver for SQL Server (13.1 or 17 should work) on your machine.
Next, get the pyodbc module for Python by running this from Windows command prompt:
pip install pyodbc
Then open up a python shell using 'py' or 'python' and enter the following after editing configuration values to match your development environment:
import pyodbc
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=WideWorldImporters;UID=DemoUser;PWD=123Password')
cursor = cnxn.cursor()
#Sample of a simple SELECT
cursor.execute("SELECT TOP (100) Comments, count(*) FROM WideWorldImporters.Sales.Orders GROUP BY Comments")
row = cursor.fetchone()
while row:
print(row[0] + ': ' + str(row[1]))
row = cursor.fetchone()
If you have configured everything correctly, running this code will print each Comments value with its count (note this example makes use of Microsoft's WideWorldImporters demo database).
OLAP: Facts and Dimensions
I'd say that implementing a true and effective OLAP environment is worth the project investment and will pay for itself over and again in better, more specific and actionable metrics- metrics that help administrators of operations make the best data-backed decisions, some of them very critical decisions involving millions of dollars and sometimes lives. I'd like a better look at the data before making a multi-million dollar or life-and-death decision.
Fire Drills to Ensure Successful Disaster Recovery
This cannot be said enough. I have heard countless horror stories (and lived one) of instances where backups were completing "successfully" but no one ever tested restoring them- until it was too late, and they turned out to be corrupt and not restorable. :/
Put in the time up front to make sure your data recovery strategy and high-availability guarantee actually "recover" and make "available" your data- the lifeblood of your organization.
It is important to remember that you can have a Secondary database server 100% in sync with the Primary database server, but if your application is not configured to make the switch (or your service host does not make the switch for you), you will have recovered only half of the picture: the data, but not data availability through the application.
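For example, a sketch of the application side of that switch, assuming an Availability Group listener (the listener and database names are hypothetical):
using System.Data.SqlClient;
// MultiSubnetFailover tells the client to try all listener IPs in parallel,
// so the app follows the AG listener to whichever replica is now Primary
var cnStr = "Server=tcp:ag-listener,1433;Database=Sales;Integrated Security=true;MultiSubnetFailover=True";
using (var cn = new SqlConnection(cnStr))
{
    cn.Open(); // lands on the current Primary, even right after a failover
}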
Good References on Disaster Recovery Strategies and what to consider ("sensitivity of data, data loss tolerance, required availability, etc."):
https://www.sqlshack.com/sql-server-disaster-recovery/
https://support.microsoft.com/en-us/help/822400/description-of-disaster-recovery-options-for-microsoft-sql-server
ETL and the Case Against a Single System for All Business Processes
Assume the database servers are:
- CMS, Orders (Dynamics 365 data)
- Accounting (Sage 50 data)
- Warehouse, Logistics (SAP data)
- Field Service, Product Support (SalesForce data)
- Call Center (TantaComm data)
For instances where a single ERP system covering every business function is not workable, you can use products like SSIS, Informatica and Oracle Data Integrator to visually map ETL flows between these systems and apply code operations to the data in a variety of programming languages.
Of course, there is no "one-size-fits-all 100% of our business needs" ERP solution; different business productivity software works better or worse for different industries and different departmental needs, for all manner of reasons. Why fight the choice of business tool that your business' management and best minds know- and have proven- is most effective for their often very specific jobs?
Allowing a business to integrate the "best apps for each particular type of job" is really just a sort of microservices architecture. But if we are working with custom code in these various LOB applications (at least some of the LOB apps will need custom API integrations so they can talk to each other), then with each new block of custom code comes more decentralized codebases and decentralized test scripts...
All considered, I would argue that trying to put square pegs into round holes a la "ERP-everything" is a much, much, MUCH bigger effort than a well-developed and tested scheduled ETL Data Warehouse refresh process to centralize your company's data.
It's always a balance with software. Use the most effective apps for your organization and departmental requirements, but try to avoid the dependency hell that can come along with a poorly managed distributed application environment.
Here are the ETL steps in order, from the resource link below (a code sketch follows the steps):
1. Extract the data from external data sources such as line-of-business systems, CRM systems, relational databases, web services- anywhere you have business data you want to summarize.
2. Transform the data. This includes cleansing the data and converting it to an OLAP-friendly data model, which traditionally consists of dimension and fact tables in a star or snowflake schema.
3. Load the data so that it can be quickly accessed by querying tools such as reports and dashboards.
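A bare-bones sketch of those three steps using Dapper, as in the earlier examples (the source and warehouse schemas here are hypothetical):
using System;
using System.Data.SqlClient;
using System.Linq;
using Dapper;
public class OrderRow  // shape of the source rows (assumed schema)
{
    public int OrderID { get; set; }
    public int CustomerID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }
}
public static class NightlyEtl
{
    public static void Run(string srcConnStr, string dwConnStr)
    {
        using (var src = new SqlConnection(srcConnStr))   // Extract: a line-of-business database
        using (var dw = new SqlConnection(dwConnStr))     // Load target: the data warehouse
        {
            var orders = src.Query<OrderRow>("SELECT OrderID, CustomerID, OrderDate, Total FROM Sales.Orders");
            // Transform: reshape each row to the fact-table grain, deriving a star-schema date key
            var facts = orders.Select(o => new
            {
                o.OrderID,
                o.CustomerID,
                DateKey = int.Parse(o.OrderDate.ToString("yyyyMMdd")),
                o.Total
            });
            // Load: Dapper runs the parameterized INSERT once per fact row
            dw.Execute(
                "INSERT INTO dbo.FactOrders (OrderID, CustomerID, DateKey, Total) VALUES (@OrderID, @CustomerID, @DateKey, @Total)",
                facts);
        }
    }
}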
"90% of the requirements now is better than 100% of the requirements never" -Unknown
Reference: https://blogs.msdn.microsoft.com/andreasderuiter/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data/
Log Shipping and Log (and/or Differential Backup) Restoration
A "transaction log" is just a subset of a subset of a full backup containing a set of all T-SQL transactions for a given (usually short span) amount of time; they are typically set at intervals of every few minutes but this varies according to application and data compliance needs.
A commenter in the referenced web link below aptly notes:
"the point of log shipping is to keep the secondary server closer to 100% synced than the full backups (which are much larger and produced much less frequently than log backups)."
Log shipping can be used as a constant "warm backup" (similar to SQL Server database mirroring) of all production transactions for a standby/failover SQL Server*, or for a dedicated backup SQL Server.
In short: a full backup contains the entire database you need to recover, up to a certain point in time.
A differential backup is a subset of a full backup (for bigger organizations, full backups may span days of data), and it is prudent to include one or even several differential backups in the schedule to ensure quick disaster recovery (and full recovery of vital customer data).
Transaction log backups are the small chunks of SQL transactions taken between differential backups; applied in sequence on top of the full .bak and the differentials, they bring the database forward to a point in time.
These disaster recovery tools that SQL Server affords DBAs and developers allow you to restore your production data back to the point in time when your server failed (a scripted sketch follows the notes below):
- Apply the most recent full backup of the prod server that went down*
- Apply any differential backups since the most recent full backup
- Apply all transaction logs up to the point in time of the failure**
- Apply all StandBy server transaction logs that may have occurred (if not ReadOnly)
*Note(!) on the question of Microsoft licensing for your StandBy server, should it need to be used for any production activity during your disaster recovery:
"I am not the licensing police, and I am not Microsoft – check with your licensing representative to clarify your exact situation. Generally, you can have one warm standby server. However, the second someone starts using it for reporting, testing, or anything else, you need to license it like any other server."-Brent Ozar
**"If the database is in Standby mode, users can query it – except when a log backup is being restored. You need to decide if a restore job will disconnect users, or if the restore is delayed until after the users are disconnected." -Brent Ozar
References:
https://www.sqlshack.com/sql-server-disaster-recovery/ (SQL Server Replication is another complement or alternative to log shipping and backup/restore)
https://www.brentozar.com/archive/2013/03/log-shipping-faq/