

Simple Software Engineering: Use plugin architectures to improve encapsulation

Let's say that you have to write an API service. The service handles different types of API calls: calls from third parties, from your company's apps, and from your company's internal servers.

There will inevitably be call sites that behave differently for different services. Imagine some examples: third-party calls use a different authorization method, internal-only responses get debugging headers, error payloads are formatted differently for apps, and each service uses a different logger. The list goes on.

By the end of implementation, many call sites will have logic that is conditional on the service type.

// This method has two different call sites that depend on the service:
// logging and the authorization handler.
public function handleAuthorization(Service $service, Request $request) {
    $auth = ($service->type === Service::TYPE_THIRD_PARTY)
        ? new Service\Auth\ThirdParty()
        : new Service\Auth\SomethingElse();

    $logger = null;
    switch ($service->type) {
    case Service::TYPE_THIRD_PARTY:
        $logger = new Service\Logger\ThirdParty();
        break;
    case Service::TYPE_APP:
        $logger = new Service\Logger\App();
        break;
    case Service::TYPE_INTERNAL:
        $logger = new Service\Logger\Debug();
        break;
    default:
        throw new Exception('Unrecognized logger');
    }

    try {
        $auth->check($request);
    } catch (Exception $e) {
        $logger->logError($e, [/* some data */]);
        throw $e;
    }

    $logger->logInfo('successful authorization', [/* some data */]);
}

When many call sites behave differently based on the same set of conditions, this can be considered a "two-dimensional problem." I don't think there's an official definition; it's just how I like to think about it. One dimension comprises the different conditions. The other dimension comprises all of the call sites that depend on the service type. In the API service example, the service dimension has…

[third-party API calls, calls from company apps, calls from internal servers]

and the call site dimension has...

[Handling authorization, header response selection, response formatting, logging]

In this example, there are 12 combinations to be considered (three origins * four call sites). If a new call site is added, there are 15 total considerations. If a new origin is added after that, there are 20. The number of combinations grows multiplicatively over time.

It's tempting to say, "This example should be dependency-injected anyway." But this is just a demonstration of the problem. Dependency injection doesn't solve the real problem, which is that the service definitions have no cohesion. The service is a first-class concept within the API, but the definition of each service is scattered throughout the codebase, which leads to some problems.

Switching on types is error-prone across several usages.

Writing cases manually is error-prone. When adding a new type, the author must vet every call site that handles types, because the new type might need special handling in any of them. These call sites can be hard to enumerate: they include every location where any of the types is checked. Even worse, the list can include sites where the logic works only because of secondary effects. For instance, "if this logger also implements this other interface, do this other logic" might really be defining logic for the single service that provides that interface.

Let's say that the whole API team gets hit by a bus. It's sad, but we must increase shareholder value nonetheless. The old team began a new project: adding an API service handling our new web app! So the replacement team defines authorization and response logging and launches the service into production. But they missed a few cases. A few weeks after launch, the new web service is down for two hours and no pages were fired. After some investigation, it turns out that the wrong logger was used and the monitoring service ignored errors from unrecognized services. Later, the company pays out a security bounty because internal-only debug headers were leaked. These are plausible outcomes of dealing with a low-cohesion definition - because the entire definition can't be considered at once, it's easy to overlook things that cause silent failures.

Switching on types has low cohesion.

When logic depends on the same conditions throughout the codebase, the cohesion of that particular concept is low or nonexistent. This makes sense: the service's definition is scattered throughout the codebase. It would be better if all of these definitions were grouped behind the same interface. This makes it easy to describe a service: a service is the collection of definitions inside an implementation of the interface.

Prefer plugin architectures


Marveling at the Painted Ladies on a recent trip to San Francisco. An obvious example of plugin architecture if I've ever seen one.

What does the code example look like within a plugin architecture?

// Provides a cohesive service definition.
interface ApiServicePlugin {
    public function getType(): int;
    public function getAuthService(): Service\Auth;
    public function getLogger(): Service\Logger;
    public function getResponseBuilder(): Api\ResponseBuilder;
}

// Allows per-service objects or functions to be retrieved.
class ApiServiceRegistry {
    public function registerPlugin(ApiServicePlugin $plugin): void;
    public function getAuthService(int $service_type): Service\Auth;
    // Not shown: other getters
}

public function handleAuthorization(Service $service, Request $request) {
    // Note: These would likely be dependency-injected.
    $auth = $this->registry->getAuthService($service->type);
    $logger = $this->registry->getLogger($service->type);

    try {
        $auth->check($request);
    } catch (Exception $e) {
        $logger->logError($e, [/* some data */]);
        throw $e;
    }

    $logger->logInfo('successful authorization', [/* some data */]);
}

The plugin interface improves cohesion.

The plugin provides a solid definition of an API service. It's the combination of authorization, logging, and the response builder. Every implementation will correspond to a service, and every service will have an implementation.

Plugins enforce that all cases are handled for each new service.

It's impossible to add a service without implementing the full plugin definition. Therefore, every single call site will be handled when a new service is added.

Adding a new call site means that every service will be considered.

When adding a new call site for the service, there are two options. Either it uses an existing method on the plugin interface, in which case all existing services already work, or a new method must be added to the interface, in which case every plugin must be considered.
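For concreteness, here is one way the registry itself might be implemented: a minimal sketch, assuming array-backed storage keyed by service type. The `object` return type and the `ThirdPartyPlugin` class are hypothetical stand-ins for the `Service\Auth` hierarchy in the example above.

```php
// Sketch of the registry: plugins are stored by type, and lookups for
// unregistered types fail loudly instead of silently misbehaving.
interface ApiServicePlugin {
    public function getType(): int;
    public function getAuthService(): object; // stand-in for Service\Auth
}

class ApiServiceRegistry {
    /** @var array<int, ApiServicePlugin> */
    private array $plugins = [];

    public function registerPlugin(ApiServicePlugin $plugin): void {
        $this->plugins[$plugin->getType()] = $plugin;
    }

    public function getAuthService(int $service_type): object {
        if (!isset($this->plugins[$service_type])) {
            // An unregistered service is a programming error.
            throw new InvalidArgumentException("No plugin for type $service_type");
        }
        return $this->plugins[$service_type]->getAuthService();
    }
}

// Hypothetical concrete plugin for demonstration.
class ThirdPartyPlugin implements ApiServicePlugin {
    public const TYPE = 1;
    private object $auth;
    public function __construct(object $auth) { $this->auth = $auth; }
    public function getType(): int { return self::TYPE; }
    public function getAuthService(): object { return $this->auth; }
}
```

Throwing on an unknown type reproduces the safety of the `default:` case from the switch statement, but in exactly one place instead of at every call site.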

Plugin registries make testing much easier.

Plugin registries provide an easy dependency injection method. If a plugin is not under test, simply register a "no-op" version of the plugin that does nothing or provides objects that do nothing. If something shouldn't be called, provide objects that throw exceptions when they are called. Because each call site is no longer responsible for managing a fraction of the service, the tests can focus on the logic around the call sites instead of partially testing whether the correct service was used.
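Those test doubles might look like the following sketch. All of these class names (`NullAuth`, `ThrowingAuth`, `NullLogger`) are hypothetical; the `check`/`logInfo`/`logError` signatures are assumed from the earlier examples.

```php
// NullAuth accepts every request: authorization is not under test.
class NullAuth {
    public function check(object $request): void {
        // Intentionally empty.
    }
}

// ThrowingAuth fails loudly if authorization is exercised when the
// test doesn't expect it.
class ThrowingAuth {
    public function check(object $request): void {
        throw new LogicException('Authorization should not be called in this test');
    }
}

// NullLogger records messages in memory instead of writing anywhere,
// so tests can inspect what was logged.
class NullLogger {
    public array $lines = [];
    public function logInfo(string $message, array $data = []): void {
        $this->lines[] = $message;
    }
    public function logError(Throwable $e, array $data = []): void {
        $this->lines[] = get_class($e);
    }
}
```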

Avoid plugin registries when one of the dimensions is size one

Plugin registries are great for reducing dimensionality. But what if there is just a single service, or just a single call site? Then it would be overkill to build the full class and interface hierarchy. If there is just one call site, write the basic switch statement or if/else chain. If there is only one mapping shared across a few call sites, refactor it into a map or a helper function. The full plugin architecture is only useful for managing the complexity of many services used at many call sites.
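For the "one mapping, a few call sites" case, a helper wrapping a map is enough. This is a sketch; the type constants and logger class names are hypothetical.

```php
// Hypothetical service type constants.
const TYPE_THIRD_PARTY = 1;
const TYPE_APP = 2;
const TYPE_INTERNAL = 3;

// One shared mapping instead of a repeated switch statement.
function loggerClassFor(int $service_type): string {
    $map = [
        TYPE_THIRD_PARTY => 'ThirdPartyLogger',
        TYPE_APP => 'AppLogger',
        TYPE_INTERNAL => 'DebugLogger',
    ];
    if (!isset($map[$service_type])) {
        throw new InvalidArgumentException("Unrecognized service type $service_type");
    }
    return $map[$service_type];
}
```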

Simple software engineering: Inject dependencies when possible

Why should dependencies be injected?

Code should not instantiate or otherwise access its own dependencies. Instead, prefer to pass in dependencies as arguments.

This should be done when code becomes important enough to unit test. Dependency injection makes it easier for tests to provide dependencies with different configurations. It also makes it easier to inspect the side effects introduced onto dependencies. It will also make maintenance easier because it will increase flexibility at little cost.

When working with I/O, pass in interfaces instead of instantiating classes

// Avoid new()ing the dependency
public function getDatabaseConnectionInfo(): ConnectionInfo {
    $redis = new Redis();
    return new ConnectionInfo(
        $redis->get('host'),
        $redis->get('port')
    );
}

// Prefer to pass the dependency in
public function getDatabaseConnectionInfo(
    Redis $redis
): ConnectionInfo {
    return new ConnectionInfo(
        $redis->get('host'),
        $redis->get('port')
    );
}

// Passing in an interface is even better
public function getDatabaseConnectionInfo(
    KeyValueInterface $key_value_store
): ConnectionInfo {
    return new ConnectionInfo(
        $key_value_store->get('host'),
        $key_value_store->get('port')
    );
}

Dependency injection makes testing easier.

The unit test can take a trivial in-memory store instead of either wrangling a Redis instance or mocking a constructor.

Additionally, code that manages its own dependencies can become nested deep within other code. If the first example above became deeply nested, it would be unclear that the corresponding test must manage a key/value store. This could lead to bad surprises like tests attempting to connect to Redis.
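Here is what that trivial in-memory store might look like, assuming the `KeyValueInterface` shape from the example above (a plain array stands in for `ConnectionInfo`, which isn't defined in this post):

```php
// The interface from the example, assumed to expose a simple get().
interface KeyValueInterface {
    public function get(string $key): ?string;
}

// A trivial in-memory test double: no Redis, no mocking framework.
class InMemoryKeyValueStore implements KeyValueInterface {
    public function __construct(private array $values = []) {}

    public function get(string $key): ?string {
        return $this->values[$key] ?? null;
    }
}

// The function under test, with an array standing in for ConnectionInfo.
function getDatabaseConnectionInfo(KeyValueInterface $kv): array {
    return ['host' => $kv->get('host'), 'port' => $kv->get('port')];
}
```

The test simply constructs the store with whatever values the scenario needs.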

Dependency injection increases flexibility at little cost.

In the example above, it may become necessary to introduce a caching layer in front of the key/value store. This is easy with dependency injection. Just make a new class that inherits the interface and delegates to the I/O layer on cache misses. Without dependency injection, it becomes a project to find and fix all usages.
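A caching layer along those lines might be sketched like this (the interface shape is assumed from the earlier example; `CountingStore` is a hypothetical inner store used only to demonstrate the behavior):

```php
interface KeyValueInterface {
    public function get(string $key): ?string;
}

// A read-through cache that wraps any KeyValueInterface. Because the
// callers depend on the interface, swapping this in requires no
// changes at the call sites.
class CachingKeyValueStore implements KeyValueInterface {
    private array $cache = [];

    public function __construct(private KeyValueInterface $inner) {}

    public function get(string $key): ?string {
        // array_key_exists (not isset) so that null values are cached too.
        if (!array_key_exists($key, $this->cache)) {
            $this->cache[$key] = $this->inner->get($key); // delegate on miss
        }
        return $this->cache[$key];
    }
}

// An inner store that counts reads, to show repeated gets hit the cache.
class CountingStore implements KeyValueInterface {
    public int $reads = 0;
    public function get(string $key): ?string {
        $this->reads++;
        return "value-of-$key";
    }
}
```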

Dependency injection makes it easier to make application-wide changes.

In the above example, it may become necessary to stop using the default Redis database and configure which Redis database should be used. If the Redis class is instantiated in many places, this becomes a sizable effort. Compare this to changing the single invocation that is injected throughout the application. The latter will usually be much easier.

Pass the result of I/O into business and presentation logic

// Avoid performing I/O in business logic
public function isShopTemporarilyClosed(
    ORM $orm, int $shop_id
): bool {
    // All shops manually turned off by the owner
    // are called temporarily closed.
    $shop = $orm->getFinder('Shop')->findById($shop_id);
    return $shop->is_off
        && $shop->owner->id === $shop->disabled_by_user->id;
}

// Prefer passing the result of I/O into business logic
public function isShopTemporarilyClosed(
    Shop $shop
): bool {
    // All shops manually turned off by the owner
    // are called temporarily closed.
    return $shop->is_off 
        && $shop->owner->id === $shop->disabled_by_user->id;
}

Business/presentation logic should not be overly opinionated

Applications often have several choices about where they can read equivalent data. This function shouldn't care that the data came from the ORM. Why couldn't it be passed in the POST data of a request? Or be fetched from a REST API? Ideally, logic that acts on a shop model should work anywhere.

Doing I/O in application logic makes its callers difficult to refactor

As a codebase grows, a helper function may acquire dozens, hundreds, or thousands of call sites. It may become nested deep within the application call stack and used within business-critical logic that runs into scaling problems. Manually managing dependencies makes some optimizations difficult. For example, it's difficult to ensure that the program never makes redundant I/O calls when the object is fetched via I/O in dozens of places in a codebase. This is true even with caching! Two different I/O entry points (or the same entry point with different arguments) can often produce the same results. This can be difficult or impossible to detect programmatically, even though it may be obvious to the application developer.

Passing in the result of I/O makes it easier to share the result of I/O among different callers.

I/O introduces nondeterministic behavior

It would be surprising to see a DatabaseReadException when calculating whether a shop is closed. But introducing I/O into a call increases the risk that the code throws exceptions for nondeterministic reasons like service availability.

I/O also dramatically affects timing metrics. Let's say that I/O calls are cached, and the shop is fetched in two places: once while deciding which views to render, and once while rendering the view. Later, a programmer realizes that the first fetch isn't needed and removes it. This moves the I/O call from the application logic into the view logic, causing the view logic's measured time to increase, because a former cache hit is now a cache miss that performs I/O. No regression happened, but the application performance graphs make it seem like one did.

This could also cause tests to become flaky if they actually perform the I/O and sometimes fail.

Instead, prefer to centralize or share logic related to I/O. The details of this will depend on which languages and libraries are used.
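One way to centralize that I/O is a per-request loader that fetches each shop at most once and shares the result among callers. This is a hypothetical sketch: `ShopLoader` and its fetch callback are not from the post, and a `stdClass` stands in for the shop model.

```php
// A per-request loader: the fetch callback is the single place that
// performs I/O, and results are memoized so callers share them.
class ShopLoader {
    private array $cache = [];

    /** @var callable(int): object */
    private $fetch;

    public function __construct(callable $fetch) {
        $this->fetch = $fetch;
    }

    public function load(int $shop_id): object {
        if (!isset($this->cache[$shop_id])) {
            $this->cache[$shop_id] = ($this->fetch)($shop_id);
        }
        return $this->cache[$shop_id];
    }
}
```

Business logic like isShopTemporarilyClosed then receives the loaded shop as an argument and never knows where it came from.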

Don't access static or global state within business or presentation logic

// Avoid accessing global or static state in business logic
public function isLocaleEnUs(): bool {
    return strtolower($_REQUEST['locale']) === 'en-us';
}

// Pass global or static state into business logic
public function isLocaleEnUs(string $locale): bool {
    return strtolower($locale) === 'en-us';
}

Accessing static and global state is unsafe across machines

A helper that accesses static or global state makes strong assumptions about what happened on the machine before the code executed. Note that this refers to static state: reliance on information from the execution environment, or calculated data that is stored statically. Accessing static constants or pure static functions isn't included in this.

If code lives long enough, it will eventually execute in several layers of the same application stack. Think about all the different application architectures that can exist in the same company at the same time. Reverse proxies in front of long-lived application servers, CGI scripts, batch processing jobs, monoliths, microservices, single-page applications, mobile apps, serverless lambdas, server-side rendering, etc. And to add another dimension, there are quite a few transport mechanisms available: HTTP, RPC, IPC, etc.

As code becomes longer-lived, it will eventually live within several layers at the same time. This introduces unnecessary complexity in each additional layer. If some logic directly reads the request parameters to determine the locale, then it (and everything that ever depends on it) will always need to execute within an HTTP request. Or it must fake the HTTP request environment when it runs in a layer without HTTP. Or, if the call is proxied over HTTP, the proxied call will also need to forward the request parameters, even when that doesn't make semantic sense.

How to move an existing codebase towards dependency injection

This can be done incrementally. For each commit that touches code with a hidden dependency, refactor that dependency to come from one layer higher in the stack. This is a good opportunity to introduce tests for untested code, or to simplify tests for existing code.

Over time, frequently-modified code will become fully implemented using dependency injection. It may be necessary to do a special project to modify, replace, or delete code that hasn't been touched in years. But maybe it's fine to just leave it. After all, it hasn't been modified in years.

Failing really fast - Week 3 of learning about business

Last week

I wanted to test the idea that I could sell trivia night question/answer sheets for one-off events. I made a set of 18 Super Bowl questions and answers along with a set of rules. I was just waiting for Stripe to approve my application so that I could accept payments.

This week

I had a cold starting on Wednesday, and didn't start feeling better until the weekend. That means that I didn't start making progress until Saturday.

Stripe was still reviewing my application when the weekend started. I looked into other payment methods that can be integrated with Squarespace. They also accept PayPal business accounts. I happen to have one of these!

So I made a small sales site designed to sell the Super Bowl trivia sheet that I set up. I then set up and started tweaking three Google Ads to get a cheap CPC while still showing up on qualifying searches. I haven't had any sales since then. Here are the numbers:

Impressions: 1,806
Clicks: 81
Clickthrough rate: 4.49%
Visits: 144
Conversions: 0
Conversion rate: 0%
Cost per click: $0.50

OK! So, some positives and some negatives here. Let's start with the negatives. The Google Ad console says it best:

Low quality score component. Landing page experience: below average


Harsh but fair, Google! I have a few ideas for what the contributing factors are for not getting any sales. I think that all of them are at least partially right.

First, asking somebody to pay money on an unfamiliar site that hasn't demonstrated any value is asking a lot. In the past few months I've done a lot of reading on how SaaS businesses operate. Their primary goal is to get an email address that they can use to send marketing material. For instance, "Sign up for this 6-email course on getting a high-quality email list" or "give us your email, and in exchange we will send you this guide to having a killer landing page." The idea is that by having the email address, you have a way to keep reminding them that you exist. Plus, they are more likely to convert at the end of the 6-email course (since you have hopefully given them information that they find valuable). It's not clear to me whether this extends to a consumer-minded market.

Second, the price is likely a factor. I was charging $14.99 for the trivia night. My theory was that it replaces 2+ hours of hunting down questions and another hour of setting the rules, so if the buyer values their time at all, the pre-bundled sheet is worth it. I just dropped the price to $9.99 to see if price sensitivity matters at all. I'll report back next week! My instinct is that I'll finish out the week without any sales, but I think it's worth another $20 to find out if my conversion rate is "near zero" or "absolutely zero."

Third, this is the first landing page that I've ever tried to build. As the Google Ad said, "the landing page experience is below average." I will focus on reading resources for building good landing pages. I will also find landing pages of other small products to see how they develop the real estate they have.

It wasn't all bad! The ads I ran successfully generated clicks. In my opinion, they were also clear about what I was offering. This means that I was better at identifying a need than I was at closing a deal. This tells me that this idea isn't pure trash. Pursuing the trivia idea means that I must figure out another landing page strategy to work well with an advertisement like this.

Host Super Bowl Bar Trivia - Prepackaged Super Bowl Trivia

The most successful ad I ran at getting clicks.

Cash flow this week

It has cost me $82 to run this experiment so far. $40 for ads, $12 for the domain, and $30 for 1 month of a commerce Squarespace site.

What am I doing this week?

I have the Squarespace site for another 25 days. This means that I can save myself a second Squarespace setup fee by running a second commerce site experiment just by switching the domain. I also got my Stripe account approved, which is great! I'd rather accept Stripe than PayPal.

The most obvious second experiment that I can run is for Valentine's Day trivia. I need to do some traffic sizing to validate that this is a good subject. For the sake of this blog post let's assume that it is. My hunch is that this will be used in a school setting, as opposed to sports trivia which is destined for a bar. Keyword analysis would verify this by looking for phrases like "valentine's day trivia easy."

I will also spend time this week researching landing pages. copyhackers.com is probably a good place to start. I can also skim startupsfortherestofus.com and kalzumeus.com to see if they have any other advice. If the Valentine's Day trivia starts to pan out, I can also try to see if there are other sites that sell these kinds of materials (trivia, bingo cards, etc) to see how they set up their offering. It could be that selling printable materials is the most important thing here.

I may also see if teacherspayteachers.com allows non-teachers to sell on their marketplace. It wouldn't hurt to try to sell it through a second channel.