• From Bloated SQL to Clean C#: Decoupling Logging

    How Eliminating Database-Baked Logging Led to Lighter Code, Easier Testing, and a Reusable Component

    For too long, I wrestled with a common architectural headache: logging baked directly into stored procedures. It’s a pattern many of us fall into, seemingly convenient at first glance. But, as my application grew, this convenience quickly turned into a quagmire of redundant code, difficult testing, and a complete lack of separation of concerns.

    The problem was obvious, yet insidious. Every stored procedure that needed to log an event – whether a success, a failure, or just a key operation – had its own setup for logging and redundant calls to the same two logging procedures within the database. This wasn’t just messy; it was a fundamental violation of the Single Responsibility Principle. My stored procedures were no longer just about data manipulation; they were also responsible for logging, bloating them with noise and making them harder to understand and maintain.

    The Pain Points of Database-Baked Logging

    Let’s break down the major issues I faced:

    • No Separation of Concerns: Stored procedures should ideally focus solely on database operations. Introducing logging tangled their responsibilities, making them less focused and harder to reason about.
    • Testing Nightmares: Unit testing database logic becomes incredibly cumbersome when logging is intertwined. You’re not just testing the data operation; you’re also inadvertently testing the logging mechanism, which complicates setup and verification.
    • Code Bloat and Redundancy: Imagine dozens, if not hundreds, of stored procedures, each with similar lines of code dedicated to logging. This creates massive duplication, making global changes or bug fixes a nightmare. A simple change to how logging should work required touching countless stored procedures.
    • Reduced Readability: The core logic of the stored procedure was often obscured by the logging boilerplate. It was like trying to read a book where every other paragraph was a footnote.

    The Solution: Centralized Logging with ExecuteWithLogging<T>

    My journey to a cleaner architecture led me to a powerful realization: logging belongs higher up the stack, specifically within the application layer. This is where business operations are orchestrated, and it’s the perfect place to wrap these operations with cross-cutting concerns like logging.

    The core of my solution was implementing a generic method, ExecuteWithLogging<T>. This method acts as a wrapper around any operation that returns a value of type T.

    Here’s the high-level concept:

    C#

    public async Task<T> ExecuteWithLogging<T>(Func<Task<T>> operation, string operationName)
    {
        // Log "starting operationName"
        try
        {
            T result = await operation();
            // Log "operationName succeeded"
            return result;
        }
        catch (Exception ex)
        {
            // Log "operationName failed" with exception details
            throw; // Re-throw the exception after logging
        }
    }
    

    Now, instead of logging within a stored procedure or directly in every service method, my application layer calls look something like this:

    C#

    // Before (conceptual)
    // var data = await _dbContext.GetSomeDataFromSPWithLoggingAsync();
    
    // After (conceptual)
    var data = await _logger.ExecuteWithLogging(
        async () => await _dbContext.GetSomeDataAsync(),
        "GetSomeDataOperation"
    );
    

    The Unlocked Benefits

    The impact of this refactoring was immediate and profound:

    1. Lighter, Cleaner Database Code: All logging pollution was removed from my stored procedures. They are now lean, focused, and dedicated solely to database operations. This makes them significantly easier to understand, optimize, and maintain.
    2. Simplified Testing: Testing the database layer became a breeze. I no longer needed to worry about mocking or asserting logging calls when testing my stored procedures. My unit tests for the application layer can now easily mock the ExecuteWithLogging method or verify its calls, separating the concerns perfectly.
    3. Consistent Logging: Every operation wrapped by ExecuteWithLogging<T> now logs in a consistent, standardized way. This uniformity is invaluable for debugging, monitoring, and auditing.
    4. A Standalone Logger! Perhaps the most exciting byproduct of this effort is the creation of a truly decoupled, standalone logging component. Because ExecuteWithLogging<T> is generic and independent of specific business logic, I can now package it as a reusable library. My next step is to componentize this logger and potentially offer it to other developers via GitHub and NuGet.
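To show how point 2 plays out in practice, here is a self-contained sketch of the wrapper with an in-memory log sink, so a test can assert on exactly what was logged without any mocking framework. The `OperationLogger` name and the message formats are illustrative, not the production implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Minimal stand-in for the production logger: it records messages in
// memory so a test can assert on them directly.
public class OperationLogger
{
    public List<string> Entries { get; } = new();

    public async Task<T> ExecuteWithLogging<T>(Func<Task<T>> operation, string operationName)
    {
        Entries.Add($"starting {operationName}");
        try
        {
            T result = await operation();
            Entries.Add($"{operationName} succeeded");
            return result;
        }
        catch (Exception ex)
        {
            Entries.Add($"{operationName} failed: {ex.Message}");
            throw; // re-throw so callers still see the failure
        }
    }
}

public static class Demo
{
    public static async Task Main()
    {
        var logger = new OperationLogger();

        // Success path: the wrapped operation's result passes through untouched.
        int value = await logger.ExecuteWithLogging(
            () => Task.FromResult(42), "GetAnswer");
        Console.WriteLine(value);

        // Failure path: the exception is logged, then re-thrown.
        try
        {
            await logger.ExecuteWithLogging<int>(
                () => throw new InvalidOperationException("boom"), "Broken");
        }
        catch (InvalidOperationException) { }
        Console.WriteLine(logger.Entries[^1]);
    }
}
```

In a real application the in-memory list would be replaced by an injected logging abstraction; the control flow, not the sink, is the point here.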

    This refactoring was a huge effort, involving significant changes across the application and database layers. But the return on investment in terms of code quality, maintainability, testability, and future reusability has been immense. If you’re struggling with tangled logging in your data access layer, I highly recommend exploring a similar approach to centralize and decouple it. Your future self (and your teammates) will thank you.

    About Me

    Hey — I’m Paul, a software developer building legal tech products with .NET and a lot of stubborn curiosity. I’m passionate about turning complex business logic into clean, testable code, and I write about the ups, downs, and discoveries along the way.

    Want to follow along? I’m @PaulAJonesJr on Twitter, and I blog at PaulJonesSoftware.com. You can also reach out directly: paul@pauljonessoftware.com.

  • 11 Lessons I Didn’t Know I Was Signing Up for When I Built My First SaaS

    When I set out to build my first SaaS product, I thought the hardest part would be writing clean code.
    I figured the technical decisions would carry the most weight — choose the right stack, organize the database, write good tests, ship features.

    I had no idea I was about to get a crash course in business, legal strategy, pricing, communication, and product thinking.

    Here are 11 lessons I didn’t expect — the ones you only learn after you’re in it.

    1. You’re Not Just Building a Product — You’re Building a Platform

    Sure, you’re creating a feature or solving a problem. But that problem sits inside a wider system: authentication, user roles, tenant isolation, audit logs, deployments, and documentation. Every decision you make early on affects scalability, maintainability, and monetization down the road.

    Code is just one piece of the machine.

    2. Technical Architecture Is a Business Decision

    The way you model users, clients, and permissions can shape your pricing tiers, your UX, and your go-to-market options.

    Do users belong to one client or many? Will clients have sub-accounts? These aren’t just schema concerns — they’re business model questions.

    You can’t escape them — and eventually, you’ll learn to welcome them.

    3. There Are Monetization Opportunities Hidden in Every Layer

    I used to think there was one product to build.

    But the truth is: the infrastructure is a product.
    Features you originally build for internal use — role-based access control, tenant-aware APIs, config systems — may have standalone value. What starts as a scaffold might become a second business.

    4. Pricing Strategy Isn’t Just a Line on the Website

    Flat-rate or usage-based? Per-user or per-feature? Monthly or annual?

    Your pricing model directly affects who you attract, what they expect, how profitable you are, and whether your churn rate breaks you.

    I learned (the hard way) that pricing isn’t a footnote — it’s part of the product.

    5. Licensing Matters More Than You Think

    If you’re distributing code — or even templates, boilerplate, or docs — the license you choose shapes what others can and can’t do with it.

    MIT? GPL? Commercial license? Custom EULA?
    Each says something about your intent, your protection, and your revenue model.

    And yes — content deserves the same consideration. It’s intellectual property too.

    6. You Will Deal With Legal Stuff, Like It or Not

    Terms of service. Privacy policies. Data handling. GDPR. Liability clauses.

    If your app handles real user data or collects money, you have to address these. Even if you’re small now, your future self will be glad you didn’t cut corners.

    7. You Become the PM by Default

    Even if you’re the only developer, you still have to plan, prioritize, write specs, debug regressions, track bugs, and ship features.

    Project management is not optional. Eventually, you realize the backlog isn’t going to manage itself.

    8. Testing Becomes Mission-Critical (Not Just a Nice-to-Have)

    The more users, roles, tenants, and edge cases you support, the more fragile your app becomes.

    A strong test suite gives you confidence to refactor, scale, and ship without breaking production.
    I used to see tests as a luxury. Now, I treat them like insurance — critical infrastructure that pays for itself.

    9. Naming Things Is a Strategic Choice

    What you call something today determines how you can use it tomorrow.

    Calling something a “User” vs. an “Attendee” or “Client” vs. “Occasion” isn’t just a semantic decision — it’s about flexibility, clarity, and how reusable your foundation becomes across industries.

    Naming is architecture.

    10. Packaging Is Half the Battle

    You might think you’re building a feature.

    But what you’re really building is a system — one that could help others if you package it right.
    Your internal scaffolding, your starter templates, your RBAC engine, your tenant-aware models — all of it can be reused, sold, or open-sourced. There’s more value under the hood than you realize.

    11. Stealth Mode Is Sometimes Your Best Defense

    When you’re working on something with real potential — especially in a high-value niche — being too visible, too early can backfire.

    You attract attention, and not always the good kind. Competitors, cloners, and opportunists are watching.
    Sometimes, keeping the name, the domain, and the use case under wraps gives you the space to build it right before others catch on.

    You can still share the journey — just not all the blueprints.

    Final Thoughts

    Building your first SaaS is more than a coding exercise — it’s an education in business, product design, risk management, and long-term thinking.

    You’ll learn how to charge for your work, how to protect it, and how to talk about it with clarity and confidence.

    I didn’t know I was signing up for all that. But I’m glad I did.

    About the Author

    I’m a software developer and entrepreneur building tools at the intersection of law, logic, and software.
    My journey into SaaS has taught me more than I expected — about business models, technical architecture, and the surprising complexity of “simple” products.

    I write about software development, technical product design, and the lessons learned along the way.

    📬 Blog: PaulJonesSoftware.com
    🐦 Twitter/X: @PaulAJonesJr
    💼 LinkedIn: linkedin.com/in/paulajonesjr

    If you’re building something, thinking through an idea, or just curious about the realities of turning code into a business, I’d love to connect.

    Follow along — and let’s build something meaningful.

  • How to Make Properties Immutable After Initialization in C#

    Author: Paul A. Jones Jr., MSE

    In domains like legal tech, consistency is everything. Once critical information like a discovery date or a statutory deadline is captured, it should never be silently modified elsewhere in the system. Enforcing immutability in your data models ensures reliability and protects the integrity of calculations driven by complex jurisdictional rules.
    This article walks through several practical approaches to making object properties immutable after initialization in C#, from traditional readonly fields to modern record types and init setters introduced in C# 9.

    Why Immutability Matters (Especially in Legal Tech)

    • Prevents accidental mutation.
    • Supports thread-safety and predictability.
    • Aligns with functional programming best practices.
    • Helps you build reliable, testable domain logic.

    Target Compatibility

    • .NET Version: .NET 6 or higher
    • C# Version: C# 9.0+


    Three Approaches to Property Immutability in C#

    Method 1: Readonly Fields (Classic Approach)

    class SearchCriteria {
        public readonly int CaseTypeId;
        public readonly int JurisdictionId;
        public readonly DateTime IncidentDate;
    
        public SearchCriteria(int caseTypeId, int jurisdictionId, DateTime incidentDate) {
            CaseTypeId = caseTypeId;
            JurisdictionId = jurisdictionId;
            IncidentDate = incidentDate;
        }
    }

    Pros: Simple and effective
    Cons: No object initializer support, not true properties

    Method 2: Init-Only Setters (C# 9.0 and Later)

    class SearchCriteria {
        public int CaseTypeId { get; init; }
        public int JurisdictionId { get; init; }
        public DateTime IncidentDate { get; init; }
    }
    
    var criteria = new SearchCriteria {
        CaseTypeId = 1,
        JurisdictionId = 2,
        IncidentDate = DateTime.Now
    };

    Pros: Supports object initializers, full property semantics
    Cons: Potential exposure via reflection

    Method 3: Use a record Type

    public record SearchCriteria(
        int CaseTypeId,
        int JurisdictionId,
        DateTime IncidentDate
    );

    Pros: Succinct, immutable by default, value equality
    Cons: Requires C# 9+, new to some teams
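One more point in favor of records, shown here as a quick illustrative sketch: they support non-destructive mutation via `with` expressions, so a "changed" criteria object is a brand-new instance and the original is never touched. They also get value equality for free.

```csharp
using System;

public record SearchCriteria(
    int CaseTypeId,
    int JurisdictionId,
    DateTime IncidentDate
);

public static class Demo
{
    public static void Main()
    {
        var original = new SearchCriteria(1, 2, new DateTime(2024, 1, 15));

        // 'with' copies the record, changing only the named members.
        // 'original' itself is untouched.
        var revised = original with { JurisdictionId = 3 };

        Console.WriteLine(original.JurisdictionId); // still 2
        Console.WriteLine(revised.JurisdictionId);  // 3

        // Value equality: records with identical members compare equal.
        Console.WriteLine(original == (revised with { JurisdictionId = 2 }));
    }
}
```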

    Choosing the Right Strategy

    Method          | Ideal When…
    Readonly fields | You want absolute immutability with no setters
    Init setters    | You need immutability with flexible initialization
    Records         | You want full immutability and value semantics

    Final Thoughts

    Immutability isn’t just a programming nicety — in domains like legal tech, it’s a safeguard. Whether you’re building jurisdiction-aware logic or designing rules engines that drive your system, immutability keeps your data consistent and your logic dependable.

    Choose the approach that best fits your architecture, and lean into the safety and predictability that immutable objects provide.


  • The Developer’s New Frontier: How Building SaaS Is Like Investing in Real Estate

    By Paul A. Jones Jr., for PaulJonesSoftware.com

    When most people think about investing, they imagine stocks or property portfolios. But as a developer building SaaS (Software as a Service) applications, I’ve come to realize something striking: a well-built SaaS product is a digital income property. In fact, it behaves so much like real estate that developers should start thinking of themselves as digital landlords.

    From “tenants” to “leases,” from “monthly rent” to “property maintenance,” the metaphors go far beyond surface-level. SaaS is the real estate of the internet.

    📦 SaaS = Digital Property

    Let’s start with the basics.

    A SaaS app is a property you build.
    You develop its foundation (the backend), raise its walls (the UI), wire it for power (APIs), and furnish it (features and UX). Once it’s ready, you “rent” it out via subscriptions.

    Your tenants (users) move in with monthly recurring payments — MRR — essentially digital rent.

    And just like a landlord, your job is to keep the place in good shape:

    • Patch bugs (fix the plumbing)
    • Scale infrastructure (install a new HVAC)
    • Add new features (build an addition)
    • Support users (answer service requests)

    Neglect any of these and users will churn, just like tenants breaking a lease on a dilapidated building.

    💰 Cash Flow, Equity, and Digital Appreciation

    Just as property appreciates when it’s well-maintained and fully occupied, SaaS equity grows over time — in both value and reliability.

    SaaS rewards you in two powerful ways:

    1. Cash flow — recurring revenue from subscriptions (just like monthly rent)
    2. Equity — the growing resale value of the app (based on revenue, churn rate, brand, and tech)

    And like real estate, SaaS is sellable. Marketplaces exist for flipping profitable apps. The more stable and optimized your “property,” the higher the multiple.

    🏘️ A Tale of Two Portfolios

    Compare a landlord and a SaaS founder:

    Real Estate Investor          | SaaS Founder
    Buys/builds property          | Builds an app
    Finds tenants                 | Acquires users
    Collects rent                 | Collects subscriptions
    Maintains property            | Maintains codebase
    Increases value with upgrades | Adds features, improves UX
    Can sell or refinance         | Can sell or scale with funding

    At scale, both create freedom through cash-flowing assets.

    🧱 Maintenance = Retention

    You wouldn’t let a roof leak in a rental home. You shouldn’t let bugs pile up in your app.

    Upkeep is everything:

    • UX/UI = Curb appeal
    • Onboarding = Tenant welcome packet
    • Feature upgrades = Renovations
    • Backups, uptime = Property insurance

    Even marketing parallels real estate. Just like listings, you need SEO, social proof, and clear value props to attract “tenants.”

    SaaS even has leases — annual billing cycles, contracts, and EULAs.

    🧠 Portfolio Thinking

    Savvy SaaS developers think like real estate investors:
    One unit is good — but a portfolio is better.

    You can:

    • Build multiple apps targeting adjacent niches
    • Cross-promote between products
    • Share infrastructure and operational knowledge
    • Diversify your income streams and risk exposure

    Every additional product adds digital doors to your property portfolio.

    ⏳ Exit Strategies and Working Retirement

    You might not want to do this forever — and that’s okay.

    You can:

    • Sell your SaaS (like selling a rental)
    • Hire someone to maintain it (property manager equivalent)
    • Keep it as a passive cash-flowing asset (retired landlord model)

    That’s the beauty: you’re not just building code — you’re building freedom.

    💬 Final Thoughts: Retire on Your Code

    If you’ve ever looked at a seasoned real estate investor and admired how they live off their portfolio, realize this:

    You can do the same — with code.

    • You’re the builder
    • The landlord
    • The asset manager
    • And eventually, the investor with exit optionality

    Your code is your property.
    Your MRR is your rent roll.
    Your users are your tenants.

    Treat your apps like digital buildings. Maintain them. Monetize them. Modernize them.

    Because done right, a SaaS portfolio is more than just a business — it’s a retirement plan you can live with, live off, and be proud of.

    About the Author

    Paul A. Jones Jr. is a software engineer and legal tech founder developing tools for professionals in law and other regulated industries. He writes about systems thinking, modern workflows, and the future of expert services at PaulJonesSoftware.com. Follow him on Twitter: @PaulAJonesJr.

  • When Delete Doesn’t Mean Gone: The Quiet Power of Soft Deletes in Legal Tech

    In legal applications, “deleting” data is rarely as simple as it sounds. Records often represent more than rows in a database — they carry evidentiary weight, compliance implications, or even the history of a professional judgment. For software engineers building systems that serve lawyers, firms, or institutions subject to regulation, soft deletes are not just a technical convenience — they are essential infrastructure.

    This article explores why soft deletes matter in legal tech, what roles they serve in real-world applications, and how to ensure they’re implemented and tested correctly.

    The Case for Keeping What’s Been Deleted

    In typical web apps, a delete button might remove a user or document permanently. But in legal systems, permanence is the enemy of accountability if it comes without a trail. Legal users — whether they are lawyers, paralegals, or administrators — often need the ability to “remove” something from visibility while still preserving a historical record for audit or regulatory purposes.

    Soft deletes solve this problem by marking records as deleted (often with an IsDeleted or DeletedAt field) rather than removing them from the database. This lets systems:

    • ✅ Maintain a complete audit trail of user activity
    • ✅ Support “undelete” or record recovery flows
    • ✅ Comply with data retention or recordkeeping rules
    • ✅ Protect against accidental or malicious loss of important data
    • ✅ Enable internal reviews and forensic analysis

    From the user’s perspective, the data is gone. From the system’s perspective, it’s archived and protected.
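    To make the pattern concrete, here is a minimal in-memory sketch of a soft-delete service. The `CaseRecord` and `CaseService` names are illustrative; in a real system the filtering usually lives in the data layer (for example, a global query filter in your ORM) rather than in every service method:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative domain object; only the soft-delete fields matter here.
public class CaseRecord
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
    public bool IsDeleted { get; set; }
    public DateTime? DeletedAt { get; set; }
}

// Delete flips a flag, GetAll filters it out, Restore brings it back.
// Nothing is ever physically removed from the store.
public class CaseService
{
    private readonly List<CaseRecord> _store = new();

    public void Add(CaseRecord record) => _store.Add(record);

    public IEnumerable<CaseRecord> GetAll() =>
        _store.Where(r => !r.IsDeleted);

    public CaseRecord? GetById(int id, bool includeDeleted = false) =>
        _store.FirstOrDefault(r => r.Id == id && (includeDeleted || !r.IsDeleted));

    public void Delete(int id)
    {
        var record = GetById(id);
        if (record is null) return;
        record.IsDeleted = true;
        record.DeletedAt = DateTime.UtcNow; // preserves *when* it was removed
    }

    public void Restore(int id)
    {
        var record = GetById(id, includeDeleted: true);
        if (record is null) return;
        record.IsDeleted = false;
        record.DeletedAt = null;
    }
}
```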

    Testing for Success: More Than Just “Did It Disappear?”

    Implementing soft deletes correctly requires more than just toggling a flag in the database. Developers must be deliberate about how they filter, retrieve, and report on soft-deleted data.

    Here’s what to look for in a good test suite:

    ✅ 1. Visibility Filtering

    var activeItems = service.GetAll();
    Assert.IsFalse(activeItems.Any(x => x.IsDeleted));
    

    ✅ 2. Data Preservation

    var record = service.GetById(id, includeDeleted: true);
    Assert.IsTrue(record.IsDeleted);
    

    ✅ 3. Reversibility (Optional but Valuable)

    service.Restore(id);
    var restored = service.GetById(id);
    Assert.IsFalse(restored.IsDeleted);
    

    ✅ 4. Logging and Audit

    In legal tech, every delete should leave a trace. Testing that audit logs are written can be just as important as testing the delete action itself.

    What Soft Deletes Are Not

    It’s worth noting that soft deletes are not a substitute for formal archiving or regulatory compliance tools. They’re a development pattern — a way to give systems flexibility, resilience, and traceability. But they need to be paired with thoughtful access controls, logging strategies, and backup policies to be truly effective in sensitive domains.

    Designing for Trust

    In law, trust is currency. Software that silently erases data undermines that trust. Soft deletes offer a quiet but powerful way to support legal workflows while maintaining a verifiable system of record. Whether you’re building internal tools, client portals, or data infrastructure for a firm or platform, designing with deletion in mind is part of designing for accountability.


    About the Author

    Paul A. Jones Jr. is a software engineer and legal tech founder developing tools for professionals in law and other regulated industries. He writes about systems thinking, modern workflows, and the future of expert services at PaulJonesSoftware.com. Follow him on Twitter: @PaulAJonesJr.

  • The Harsh Realities of Building in Public in 2025: A Developer’s Take

    Why today’s tech environment makes building in public more exhausting than empowering — and what developers can do instead.

    I saw a tweet last week from a fellow developer who boldly declared that “Building in public in 2025 sucks!” It got me thinking — what’s changed? Just a few years ago, building in public was the golden ticket for indie developers and startup founders. Everyone was sharing their revenue dashboards, development progress, and behind-the-scenes stories to great fanfare.

    So why the sudden shift in sentiment? After diving into this topic, I think I understand the frustration.

    The Golden Age is Over

    Building in public had its moment. In the early 2020s, it felt fresh and authentic. Developers like Pieter Levels, makers on Indie Hackers, and countless Twitter builders were sharing everything — their code commits, revenue milestones, even their failures. The community was smaller, more intimate, and genuinely supportive.

    But like all good things on the internet, it got saturated. What was once genuine transparency became performative theater. Now everyone’s “building in public,” and the signal-to-noise ratio has plummeted.

    The Dark Side Emerges

    Your Competitors Are Taking Notes

    Here’s the uncomfortable truth: when you share your metrics, strategies, and operational details publicly, you’re essentially giving competitors access to your playbook. They can steal your ideas, refine them, and potentially gain an advantage over you.

    That innovative feature you’re working on? That clever marketing strategy that’s driving growth? Your unique positioning in the market? All of it becomes public knowledge the moment you hit “tweet.” It only takes one motivated competitor to cause real damage.

    The Relatability Problem

    As your success grows, your audience may find it harder to relate to your journey. What started as an inspiring underdog story can become difficult for people to connect with once you’re clearly successful.

    Pat Flynn, one of the pioneers of income reporting, eventually stopped sharing his numbers for exactly this reason. When your “transparent journey” turns into humble-bragging about six-figure months, you’ve lost the plot.

    The Content Treadmill

    Building in public creates an insatiable content monster. Your audience expects constant updates, behind-the-scenes content, and emotional transparency. What starts as authentic sharing becomes a full-time content creation job.

    As a developer, I want to spend my time building great software — not crafting the perfect “here’s what I learned this week” thread for Twitter.

    The 2025 Reality Check

    The internet landscape has changed dramatically since building in public first gained popularity:

    • Information Overload: Everyone’s sharing everything. Your authentic journey gets lost in a sea of similar stories.
    • Algorithm Fatigue: Social platforms prioritize engagement over authenticity, pushing creators toward clickbait.
    • Copycat Culture: The moment something works, it’s copied endlessly. Innovation gets commoditized.
    • Privacy Concerns: We’re more aware than ever of the value of our data. Sharing everything publicly can feel reckless.

    When Building in Public Still Makes Sense

    I’m not saying building in public is completely dead. It can still work, but the context matters:

    • Early Stage Validation: Transparency can help gather feedback and build an early audience.
    • Educational Content: Sharing lessons learned — without giving away strategy — builds authority.
    • Community Building: Transparency can build trust in niche or mission-driven communities.

    The key is being strategic about what you share — and why.

    The Smarter Approach

    Instead of building completely in public, consider a shift toward “building in community.” This means:

    • Share selectively with trusted peers and mentors.
    • Build relationships with developers and entrepreneurs in private spaces.
    • Focus on product and customer development over audience development.
    • Document your journey for yourself first — public consumption second.

    The Bottom Line

    That developer’s tweet resonated because many of us are feeling the same fatigue. Building in public promised connection and growth, but often delivered anxiety and competitive risk.

    The most successful developers I know today are quietly building great products, focusing on their customers, and sharing thoughtfully — not constantly. They’ve realized that the best marketing is building something people love, not performing a journey for an audience that may never convert.

    Maybe it’s time to close the laptop on building in public and open it on building great software instead.


    Let’s Keep the Conversation Going

    If this resonated with you, I’d love to hear your thoughts. Have you pulled back from building in public? Are you navigating similar challenges?

    📩 Subscribe to my newsletter at PaulJonesSoftware.com to get posts like this — and honest reflections on startup life — delivered straight to your inbox.

    📣 Follow me on social: @PaulAJonesJr on Twitter/X, or linkedin.com/in/paulajonesjr.
    Thanks for reading — and keep building smart.

  • Why I’m Not Building One Calculator – I’m Building 50

    When I first started working on my software platform, I imagined the core of the system as a single, elegant calculator. The logic seemed simple enough: take a few inputs, apply some business rules, return a result. But as I dug deeper into the real-world rules I needed to model, I quickly realized something: I’m not building one calculator. I’m building fifty.

    Each jurisdiction — in my case, each U.S. state — operates under its own unique framework. What counts as a valid rule in one state might be completely irrelevant in another. Even basic concepts like “how long you have to act” or “when the clock starts ticking” vary wildly depending on where you are.

    The Illusion of Uniformity

    At the surface, many rule systems look the same. You assume you’ll just have to deal with a few variations in terminology or maybe a couple of edge cases. But once you start reading the fine print, it hits you: the edge cases are the system.

    For example, one jurisdiction might say the timer starts on the date of the incident. Another might add a “discovery rule,” allowing extra time if the issue wasn’t known right away. Some include hard deadlines, even if the problem was discovered late. Others carve out special exceptions based on profession, location, or relationship between the parties involved.

    The Shift in Thinking

    This forced a shift in how I think about architecture. I couldn’t build a generic engine that applied rules equally across all states. Instead, I had to acknowledge that each jurisdiction was essentially a self-contained rules engine — a little program of its own with its own logic, exceptions, and calculations.

    That means my platform isn’t hosting one calculation engine. It’s hosting 50 — one per state. And possibly more, if you account for local, federal, or cross-border differences.

    The Engineering Response

    From a technical standpoint, that’s a different beast. It requires:

    • A shared structure for inputs (I call this SearchCriteria)
    • A clean way to route requests based on jurisdiction
    • A rule system that’s modular, testable, and easy to extend
    • A way to encode exceptions and edge cases without cluttering the core logic
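    To sketch the routing idea in code: each state gets its own calculator behind a shared interface, and a registry maps jurisdiction codes to implementations. The type names are illustrative, the real SearchCriteria is richer than this, and the three-year rule below is a placeholder, not South Carolina's actual rule:

```csharp
using System;
using System.Collections.Generic;

// Shared input shape for every jurisdiction (simplified for illustration).
public record SearchCriteria(string StateCode, DateTime IncidentDate);

// Each state implements its own self-contained rules engine.
public interface IDeadlineCalculator
{
    DateTime CalculateDeadline(SearchCriteria criteria);
}

// Placeholder rule: a flat three-year period from the incident date.
public class SouthCarolinaCalculator : IDeadlineCalculator
{
    public DateTime CalculateDeadline(SearchCriteria c) => c.IncidentDate.AddYears(3);
}

// One registry entry per state: adding a jurisdiction means adding a
// class, not growing a giant switch statement inside a shared engine.
public class CalculatorRegistry
{
    private readonly Dictionary<string, IDeadlineCalculator> _calculators =
        new(StringComparer.OrdinalIgnoreCase)
        {
            ["SC"] = new SouthCarolinaCalculator(),
            // ["NC"] = new NorthCarolinaCalculator(), ...one per state
        };

    public IDeadlineCalculator For(string stateCode) =>
        _calculators.TryGetValue(stateCode, out var calc)
            ? calc
            : throw new NotSupportedException($"No calculator for '{stateCode}' yet.");
}
```

The registry also gives you a natural seam for testing: each state's calculator can be unit-tested in isolation against that state's edge cases.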

    I’ve chosen South Carolina as my pilot state. It’s where I live, and I’m already familiar with some of its legal patterns. Starting there gives me a concrete case to work with as I figure out how best to structure the others.

    Why This Matters

    This isn’t just about law. Any system that models real-world regulations, compliance standards, or region-specific rules will eventually run into the same challenge. Uniformity is a comforting illusion — but if you design for it, your app will fall apart under the weight of real complexity.

    Instead, embrace the differences early. Design for change. And expect to build more than you thought you would.

    Coming Thursday: I’ll be sharing how I structured the SearchCriteria class to keep user input clean and logic-specific code isolated. The goal? Modularity and long-term sanity.

    About Me

    I’m Paul A. Jones, Jr., a developer and founder building in public.

    After years in consulting, I’m launching my own software company to solve complex, rule-based problems in law and beyond. I write about bootstrapping SaaS, domain-driven design, C# and .NET development, and the real challenges of turning professional workflows into software.

    If you’re into system architecture, legal tech, or the startup grind from a developer’s POV, my blog is for you.

    📬 Subscribe to the blog: PaulJonesSoftware.com

    🐦 Follow along on Twitter/X: @PaulAJonesJr

    🔧 Building in public. No fluff. Just real dev work, every week.

  • ADO.NET Stored Procedures: A Practical Guide with C# Code and a Test Database

    Stored procedures simplify database interactions, boost performance, and enhance security. If you’re using ADO.NET, you have several ways to retrieve data from a stored procedure. This guide explores six common approaches with practical C# code examples—plus a sample SQL Server database and a fully functional console application to help you test them.


    Setting Up the Sample Database

    Before we dive into ADO.NET, let’s create a SQL Server database with sample data to work with.

    Database Creation & Seed Script

    -- Create a sample database
    CREATE DATABASE SampleDB;
    GO
    
    USE SampleDB;
    GO
    
    -- Create a Users table
    CREATE TABLE Users (
        UserID INT IDENTITY(1,1) PRIMARY KEY,
        UserName NVARCHAR(100) NOT NULL,
        Email NVARCHAR(100) UNIQUE NOT NULL,
        CreatedDate DATETIME DEFAULT GETDATE()
    );
    GO
    
    -- Seed sample data
    INSERT INTO Users (UserName, Email) VALUES 
    ('Alice', 'alice@example.com'),
    ('Bob', 'bob@example.com'),
    ('Charlie', 'charlie@example.com');
    GO
    
    

    Stored Procedures for ADO.NET Retrieval Methods

    Here are stored procedures that match the retrieval methods covered in this article.

    1. Get User Count (Output Parameter)

    CREATE PROCEDURE GetUserCount
        @UserCount INT OUTPUT
    AS
    BEGIN
        SELECT @UserCount = COUNT(*) FROM Users;
    END;
    GO
    

    2. Get Record Status (Return Value)

    CREATE PROCEDURE GetRecordStatus
    AS
    BEGIN
        DECLARE @RecentCount INT;

        SELECT @RecentCount = COUNT(*)
        FROM Users
        WHERE CreatedDate >= DATEADD(DAY, -30, GETDATE());

        -- RETURN accepts an integer expression but not a subquery,
        -- so the count is captured in a variable first.
        RETURN @RecentCount;
    END;
    GO
    

    3. Get All Users (DataReader, DataTable)

    CREATE PROCEDURE GetUsers
    AS
    BEGIN
        SELECT UserID, UserName, Email FROM Users;
    END;
    GO
    

    4. Accept Parameter & Return Output

    CREATE PROCEDURE GetUserNameById
        @UserID INT,
        @UserName NVARCHAR(100) OUTPUT
    AS
    BEGIN
        SELECT @UserName = UserName FROM Users WHERE UserID = @UserID;
    END;
    GO
    

    Retrieving Data in ADO.NET

    1. Using Output Parameters

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("GetUserCount", conn);
        cmd.CommandType = CommandType.StoredProcedure;
    
        SqlParameter outputParam = new SqlParameter("@UserCount", SqlDbType.Int) 
        { 
            Direction = ParameterDirection.Output 
        };
        cmd.Parameters.Add(outputParam);
    
        conn.Open();
        cmd.ExecuteNonQuery();
        
        int userCount = (int)outputParam.Value;
    }
    

    2. Using Return Values

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("GetRecordStatus", conn);
        cmd.CommandType = CommandType.StoredProcedure;
    
    SqlParameter returnParam = new SqlParameter("@ReturnValue", SqlDbType.Int)
    {
        Direction = ParameterDirection.ReturnValue
    };
    cmd.Parameters.Add(returnParam);
    
        conn.Open();
        cmd.ExecuteNonQuery();
    
        int statusCode = (int)returnParam.Value;
    }
    

    3. Using ExecuteScalar() for a Single Value

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        // Inline SQL for brevity; ExecuteScalar() works the same way when the
        // command is a stored procedure that selects a single value.
        SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Users", conn);
        conn.Open();
        
        int userCount = (int)cmd.ExecuteScalar();
    }
    

    4. Using a DataReader for Multiple Rows

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("GetUsers", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(reader["UserName"]);
            }
        }
    }
    
    

    5. Using a DataTable or DataSet for Bulk Data

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("GetUsers", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        
        SqlDataAdapter adapter = new SqlDataAdapter(cmd);
        DataTable dt = new DataTable();
        adapter.Fill(dt);
    
        foreach (DataRow row in dt.Rows)
        {
            Console.WriteLine(row["UserName"]);
        }
    }
    

    6. Passing an Input Parameter & Receiving an Output Parameter

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand("GetUserNameById", conn);
        cmd.CommandType = CommandType.StoredProcedure;
    
        // Input parameter (AddWithValue infers the SQL type; a typed Add is safer in production code)
        cmd.Parameters.AddWithValue("@UserID", 1);
    
        // Output parameter
        SqlParameter outputParam = new SqlParameter("@UserName", SqlDbType.NVarChar, 100) 
        { 
            Direction = ParameterDirection.Output 
        };
        cmd.Parameters.Add(outputParam);
    
        conn.Open();
        cmd.ExecuteNonQuery();
    
        string userName = outputParam.Value.ToString();
        Console.WriteLine($"User Name: {userName}");
    }
    

    Running These Methods in a Console Application

    To make testing even easier, here’s a C# console application that implements all the examples above. Just update your connection string and run the app.

    Program.cs

    using Microsoft.Data.SqlClient;
    using System.Data;
    
    namespace MyConsoleApp
    {
        internal class Program
        {
            // Update Server/Database to match your environment.
            private const string _connectionString = @"Server=Judah;Database=SampleDB;Trusted_Connection=True;TrustServerCertificate=True;";
    
            static void Main(string[] args)
            {
                UseOutputParameters(_connectionString);
                GetAReturnValue(_connectionString);
                GetASingleValue(_connectionString);
                GetMultipleValues(_connectionString);
                GetMultipleValuesWithADatatable(_connectionString);
                UseIOParameters(_connectionString);
    
                Console.ReadKey();
            }
    
            // Define methods here...
        }
    }
    

    Download the complete project!

    About the Author

    Paul A. Jones Jr. is a software engineer and legal tech founder developing tools for professionals in law and other regulated industries. He writes about systems thinking, modern workflows, and SaaS applications at PaulJonesSoftware.com. Follow him on Twitter: @PaulAJonesJr.

    You Might Also Enjoy

  • The Future of Employment in an AI-Driven Economy

    The conversation started innocuously enough at a local gathering. A UPS worker shared a story that perfectly encapsulates one of the most pressing issues of our time: the rapid displacement of human workers by AI-powered automation.

    “Contractors came in quietly one day,” he explained, “installing cameras throughout our facility. We thought it was just routine security upgrades. Turns out, they were studying our every movement—how we picked packages, our walking patterns, our sorting techniques. That data became the blueprint for robots that eventually replaced 2,000 of our colleagues.”

    This isn’t science fiction. It’s happening right now, across industries, as companies leverage artificial intelligence and robotics to optimize operations and reduce labor costs. The question isn’t whether this technological shift will continue—it’s how we’ll adapt to it.

    The Quiet Revolution in Plain Sight

    The UPS example illustrates a troubling pattern: workers unknowingly training their own replacements. This stealth approach to data collection has become commonplace as companies seek to minimize resistance while maximizing efficiency gains. The ethical implications are significant, but the business case is often compelling enough to override concerns about transparency.

    Amazon has pioneered similar approaches in their fulfillment centers, where human workers operated alongside data-collecting systems for years before widespread robotic implementation. The company now employs over 750,000 robots across their facilities, handling tasks from inventory management to package sorting that once required human hands.

    Beyond the Warehouse: AI’s Expanding Footprint

    The impact extends far beyond package handling. McDonald’s has been testing AI-powered drive-through systems that can take orders more accurately than human employees, while simultaneously reducing labor costs. Early trials show these systems can handle complex orders and even upsell customers more effectively than their human counterparts.

    In manufacturing, companies like Tesla and Ford have implemented AI-guided robotic systems that can perform quality control inspections with superhuman precision. These robots work 24/7 without breaks, sick days, or benefits, making them attractive alternatives to human inspectors.

    The trucking industry faces perhaps the most dramatic transformation. Companies like Waymo and Aurora are developing autonomous freight vehicles that could eventually eliminate the need for the 3.5 million truck drivers currently employed in the United States. While full automation is still years away, pilot programs are already demonstrating the technology’s potential.

    Even white-collar work isn’t immune. AI systems are now capable of reviewing legal documents, processing insurance claims, and performing financial analysis tasks that previously required human expertise. IBM’s Watson, for instance, can analyze medical data and suggest treatment options faster than human doctors, though it still requires human oversight.

    The Human Cost

    The statistics are sobering. The McKinsey Global Institute estimates that by 2030, between 400 million and 800 million jobs worldwide could be displaced by automation. In the United States alone, up to 73 million jobs may be at risk.

    For workers like the UPS employee I met, this isn’t an abstract economic trend—it’s a personal crisis. These are people with mortgages, families, and decades of experience in their fields who suddenly find their skills obsolete. The psychological impact of technological displacement often goes unaddressed in corporate boardrooms focused on efficiency metrics and profit margins.

    The challenge is particularly acute for middle-aged workers who may struggle to retrain for new careers. A 50-year-old package handler can’t easily transition to becoming a software developer or data analyst. The skills gap between displaced workers and available jobs continues to widen as automation accelerates.

    Adaptation or Extinction

    The harsh reality is that this transformation is unstoppable. Companies that fail to adopt AI and automation risk being outcompeted by those that do. The question becomes: how do we manage this transition responsibly?

    Some companies are taking proactive approaches. Amazon has pledged $700 million to retrain 100,000 employees for higher-skilled positions within the company. Walmart has invested heavily in upskilling programs to help workers transition from routine tasks to more complex roles involving technology management and customer service.

    The most successful adaptations often involve human-AI collaboration rather than complete replacement. In many cases, AI handles routine, repetitive tasks while humans focus on complex problem-solving, creative work, and interpersonal relationships that machines can’t replicate effectively.

    The Path Forward

    For individual workers, the message is clear: continuous learning and adaptation are no longer optional—they’re survival skills. This might mean developing technical skills that complement AI systems, focusing on uniquely human capabilities like emotional intelligence and creative problem-solving, or pivoting to industries that are less susceptible to automation.

    For policymakers, the challenge is creating support systems for displaced workers while fostering innovation. This could include expanded unemployment benefits during transition periods, publicly funded retraining programs, or even exploring concepts like universal basic income as automation eliminates traditional employment opportunities.

    Companies, too, have a role to play. Transparent communication about automation plans, investment in worker retraining, and gradual implementation that allows for workforce adjustment can help minimize the human impact of technological progress.

    The Double-Edged Sword

    It’s important to acknowledge that AI automation isn’t inherently evil. These technologies can eliminate dangerous jobs, reduce workplace injuries, and free humans from repetitive tasks to focus on more meaningful work. They can also drive economic growth, reduce costs for consumers, and enable innovations that create entirely new industries and job categories.

    The internet eliminated some jobs but created millions of others. The same could be true for AI, but the transition period may be more challenging due to the speed and scope of change.

    Conclusion

    The robot revolution is already here, quietly transforming workplaces from UPS facilities to corporate offices. The workers who will thrive in this new landscape are those who recognize the change coming and take proactive steps to adapt. Companies that manage this transition thoughtfully will not only achieve their efficiency goals but also maintain the human capital needed for long-term success.

    As I told the UPS worker that night: “You can either get out ahead of the technology or get run over by it.” The choice, both individually and collectively, is ours to make. The question isn’t whether AI will reshape the workforce—it’s whether we’ll shape that transformation in a way that benefits everyone, not just the shareholders of automation companies.

    The future of work is being written now, one algorithm and one displaced worker at a time. How that story ends depends on the choices we make today.

    Enjoyed the Content?

    Follow me on Twitter | YouTube | Instagram | LinkedIn for more insights on IT contracting, IT recruiting, and career growth.

    Subscribe to my blog to stay updated with the latest tips, strategies, and real-world advice for IT professionals!

  • Developers aren’t just building products anymore. They’re building empires.

    By Paul A. Jones, Jr., MSE | PaulJonesSoftware.com

    For experienced developers, building software isn’t the hard part anymore. The real challenge is learning how to turn that skill into sustainable income. Enter SaaS — Software as a Service. It’s not just a trend. It’s a business model that turns code into recurring revenue.

    What is SaaS?

    SaaS stands for Software as a Service. Instead of selling your software once (like boxed software of the past), you host it online and charge users a recurring fee — monthly or yearly — to access it. Think: Gmail, Notion, GitHub, Calendly, and Dropbox. These platforms don’t live on your device; they live in the cloud and update continuously.

    Why SaaS Makes Sense for Developers

    Developers have never had more tools at their fingertips to create powerful, scalable web applications. But with SaaS, the upside multiplies:

    • 💡 Recurring Revenue: Get paid every month, not just once.
    • 🌍 Global Reach: Anyone with a browser can become a user.
    • 📈 Scalability: Serve 10 users or 10,000 with minimal friction.
    • 🧠 AI Integration: Use tools like OpenAI’s GPT API to add smart features that wow your customers.
    • 📊 Analytics: Monitor usage, behavior, and feedback in real time.

    Example: From Code to Cash

    A simple job tracking tool helps contractors assign crews, track hours, and manage permits from the field. For $45/month, it’s far more efficient than paper or texts. 250 small crews on board? That’s $11,250/month in reliable MRR.
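    The math behind that figure is simple enough to check in a couple of lines (the price and crew count are the hypothetical numbers from the example above):

    ```csharp
    using System;

    class MrrMath
    {
        static void Main()
        {
            // Hypothetical figures from the job-tracking example.
            decimal pricePerMonth = 45m;
            int crews = 250;

            decimal mrr = pricePerMonth * crews;  // 45 * 250 = 11,250
            Console.WriteLine($"MRR: ${mrr}");
        }
    }
    ```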

    The Future of SaaS is Bright

    We’re entering a golden age for solo founders and micro-SaaS builders. Thanks to low-cost cloud services, powerful APIs, and AI integration, one skilled developer can build and launch a profitable SaaS business with minimal overhead.

    What’s Next?

    Stop building apps for a paycheck. Start building systems that pay you — again and again. SaaS is more than a toolset. It’s a mindset shift. If you can build it, you can monetize it.


    Paul Jones is a seasoned software engineer and SaaS architect helping developers transition from coders to founders. Follow along for insights on legal tech, AI integration, and monetization strategies.