Is Your Bespoke Software Secure? Common Pitfalls and How to Fix Them

Custom software can be incredibly secure — or shockingly vulnerable. Here are the most common security mistakes in bespoke business software, explained in plain English.

Tags: software security, bespoke software, custom software security, data protection, NZ business, cybersecurity

Key Takeaways

  • Custom software security problems are rarely dramatic hacks — they're usually simple gaps like weak passwords, missing access controls, or outdated software components.
  • Role-based access control (giving people access to only what they need) is one of the most important security measures, and it's frequently skipped in early builds.
  • Unencrypted data and missing audit logs are two gaps that are easy to prevent during development and very expensive to discover after a breach.
  • Most security vulnerabilities in bespoke software come from developers prioritising features over security foundations — it's worth having an explicit security conversation early.
  • You don't need to be technical to ask the right security questions — this article ends with a checklist of exactly what to ask your developer.

If you've had custom software built for your business — a client portal, an internal system, a booking platform, a job management tool — you probably don't spend much time thinking about whether it's secure. It works, your team uses it, it does what it's supposed to do. Security feels like someone else's problem.

The uncomfortable reality is that security gaps in bespoke software are common, often invisible, and usually only get discovered in the worst possible way: when something goes wrong. This article isn't meant to scare you — it's meant to give you the knowledge to ask the right questions before something goes wrong.

None of this requires technical expertise. Think of it like asking the right questions when a builder is renovating your office. You don't need to know how to wire a fuse box to ask whether the electrician is licensed.

Pitfall 1: Everyone Has Access to Everything

Imagine giving every person in your office a master key that opens every door, every filing cabinet, and every safe — including the ones they have no reason to access. That's what software looks like when it has no role-based access control.

Role-based access control (RBAC) means each user can only see and do what their role requires. Your receptionist might see client contact details but not financial records. Your sales team might see their own pipeline but not the payroll system. An administrator has full access; a read-only user can look but not change anything.

When this isn't set up properly, a low-level employee can access sensitive data they have no business seeing, a malicious insider can do significant damage, and a compromised account grants an attacker access to everything rather than one corner of the system.
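To make this concrete, at its core a permission check is just a lookup from role to allowed actions. Here's an illustrative Python sketch — the role and action names are invented examples, not taken from any particular system:

```python
# Illustrative sketch of role-based access control (RBAC).
# Role and action names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "receptionist": {"view_contacts"},
    "sales":        {"view_contacts", "view_own_pipeline"},
    "admin":        {"view_contacts", "view_own_pipeline",
                     "view_financials", "manage_users"},
}

def can(role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The key property is deny-by-default: an unknown role, or an action that isn't on the list, is refused rather than quietly allowed.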

What to ask your developer: "What role-based access controls are in place? Can you show me what a standard user sees versus an admin?"

Pitfall 2: No Audit Log

An audit log is a record of who did what and when — like a CCTV system for your software. "Sarah accessed the financial records at 3pm on Tuesday." "A new user account was created from an IP address in Romania." "That invoice was edited three times in the last 24 hours."

Without an audit log, you have no way to investigate suspicious activity, no way to prove what happened in a dispute, and no ability to detect a breach until the damage is already done. Many bespoke systems simply don't have them — not because they're hard to build, but because nobody asked for them.

Audit logs are particularly important in industries with compliance obligations — health, financial services, legal — but they're valuable for any business handling customer data.
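Under the hood, an audit log can be as simple as an append-only record of timestamped events. A minimal Python sketch to show the idea — the field names are my own, and a real system would write to durable storage rather than a list in memory:

```python
import time

AUDIT_LOG = []  # in a real system: an append-only table or log service

def audit(user: str, action: str, record_id: str) -> dict:
    """Record who did what, to which record, and when."""
    entry = {"ts": time.time(), "user": user,
             "action": action, "record": record_id}
    AUDIT_LOG.append(entry)
    return entry

def activity_for(user: str, since_ts: float) -> list:
    """Answer the question: what has this user accessed since time X?"""
    return [e for e in AUDIT_LOG if e["user"] == user and e["ts"] >= since_ts]
```

The second function is the whole point: being able to answer "show me what this user touched in the last 30 days" after the fact.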

What to ask your developer: "Do we have audit logging? Can I see a record of what a specific user has accessed in the last 30 days?"

Pitfall 3: Data Stored Without Encryption

Encryption is what makes data unreadable to anyone who shouldn't see it. There are two types you need to understand:

  • Encryption in transit — data is encrypted as it travels between your browser and the server. This is the HTTPS padlock in your browser. Most modern systems have this, but it's worth checking — particularly for internal tools that might have been built quickly.
  • Encryption at rest — data is encrypted on the server where it's stored. This means that if someone gains access to the database or the storage system, the data they see is scrambled and useless without the encryption keys.

Sensitive fields — passwords (which should never be stored in plain text), credit card numbers, IRD numbers, health information, personal identity details — need to be encrypted. Discovering that your customers' passwords were stored in plain text after a breach is an extremely uncomfortable conversation.
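"Never stored in plain text" means passwords go through a slow, salted, one-way hash before they touch the database. A sketch using only Python's standard library — the iteration count is an assumption, and should follow current guidance for PBKDF2-SHA256:

```python
import hashlib, hmac, os

ITERATIONS = 600_000  # assumed work factor; tune to current guidance

def hash_password(password: str):
    """Return a random salt and the derived hash.
    The password itself is never stored anywhere."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Even if the database leaks, an attacker gets salts and hashes, not passwords — and the deliberately slow hash makes guessing expensive.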

What to ask your developer: "Is sensitive data encrypted at rest and in transit? How are passwords stored?"

Pitfall 4: No Login Timeout

This one sounds minor. It isn't. If a user leaves their session logged in on a shared computer, or steps away from their desk with the system open, the next person to sit down can access everything that user can access — with no login required.

Automatic session timeouts — logging users out after a period of inactivity — are a basic control that's frequently absent from bespoke systems. For systems that handle sensitive data in environments where multiple people use the same machine (retail counters, shared workstations, healthcare settings), this is a significant risk.
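The check itself is trivial — which is part of why its absence is hard to excuse. A sketch, with a hypothetical 15-minute policy:

```python
import time

SESSION_TIMEOUT = 15 * 60  # hypothetical policy: 15 minutes of inactivity

def session_is_live(last_activity: float, now=None) -> bool:
    """A session stays valid only while the user has acted within the window."""
    now = time.time() if now is None else now
    return (now - last_activity) <= SESSION_TIMEOUT
```

A real system runs this check on every request and forces a fresh login once it fails; sensitive roles can simply get a smaller timeout value.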

What to ask your developer: "How long before an inactive session times out? Can we configure this per role?"

Pitfall 5: No Input Validation (The SQL Injection Problem)

Here's a non-technical explanation of one of the most common and dangerous security vulnerabilities in custom software: SQL injection.

Imagine your software has a search box where users can type a client name to look them up. Behind the scenes, the software takes what you type and uses it to search the database. If the software trusts whatever you type without checking it, a malicious user can type specially crafted text that tricks the database into running commands instead of just searching — like unlocking the filing cabinet and handing you all the files instead of just finding the one you asked for.

Input validation means the software checks that what a user types is actually what it's supposed to be — a name, a number, a date — before doing anything with it. This is one of those things that should be standard practice for any competent developer, but under time pressure or with less experienced developers, it gets skipped.
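The standard fix is parameterised queries: user input is handed to the database as data, never spliced into the command text. A self-contained illustration using Python and SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (name TEXT)")
conn.executemany("INSERT INTO clients VALUES (?)", [("Alice",), ("Bob",)])

def search_clients(user_input: str):
    # UNSAFE would be: f"SELECT name FROM clients WHERE name = '{user_input}'"
    # — crafted input like  ' OR '1'='1  could then rewrite the query.
    # SAFE: the ? placeholder keeps input as data, never as a command.
    return conn.execute(
        "SELECT name FROM clients WHERE name = ?", (user_input,)
    ).fetchall()
```

With the placeholder, the classic injection string is just an odd client name that matches nothing, instead of a command that returns every row.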

What to ask your developer: "How does the system protect against SQL injection and other input-based attacks? Has it been tested?"

Pitfall 6: Outdated Dependencies

Modern software isn't built from scratch — it's built using hundreds of third-party components (called libraries or dependencies) that handle common functions. This is normal and sensible. The problem is that these components need to be updated regularly, because security vulnerabilities are discovered in them constantly.

An outdated dependency is like leaving a known faulty lock on your office door. The vulnerability is public knowledge — attackers scan for it specifically — and the fix exists, you just haven't applied it.

Software that was built two years ago and hasn't been actively maintained may be running components with dozens of known security issues. This is one of the most common and underappreciated risks in bespoke software.

What to ask your developer: "When were dependencies last updated? Is there a process for monitoring and applying security patches?"

Pitfall 7: No Backups (or Backups Nobody Has Tested)

Backups aren't exactly a security feature, but they're your last line of defence when everything else goes wrong — whether from an attack, a software bug, or human error. The question isn't just "do we have backups?" It's "how often? Where are they stored? How quickly can we restore from one? Has anyone ever actually tested restoring?"

A backup that's never been tested is not a backup — it's an assumption. Many businesses discover their backup system was broken only when they try to use it after a disaster.

For systems handling important business data, you want: automated daily backups at minimum, backups stored separately from the live system (so a ransomware attack on the main system doesn't also encrypt the backups), and a documented, tested restore process.

What to ask your developer: "How often is data backed up? Where are the backups stored? When was the last time a restore was tested?"

Pitfall 8: Weak Passwords and No Two-Factor Authentication

Compromised credentials — passwords that have been guessed, stolen, or leaked — are the most common way attackers get into business systems. If your software lets users set the password "Password1" and doesn't offer two-factor authentication (2FA), you're one stolen password away from a breach.

2FA adds a second layer: even if an attacker has your password, they also need access to your phone or email to log in. This single control blocks the vast majority of credential-based attacks.
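The codes from an authenticator app aren't magic — they follow an open standard (TOTP, RFC 6238) and can be generated with nothing but the standard library. Shown here purely to demystify how the second factor works:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): a short code derived
    from a shared secret, changing every `step` seconds."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both your phone and the server hold the same secret and compute the same code, so a stolen password alone is no longer enough to log in.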

For any system with admin access, sensitive data, or financial information, 2FA should be mandatory, not optional.

What to ask your developer: "Does the system enforce strong passwords? Is two-factor authentication available and turned on for admin accounts?"

Pitfall 9: Overly Permissive APIs

Many modern applications communicate with other systems through APIs — connections that allow one piece of software to talk to another. These connections need to be properly secured, just like a door needs a lock.

An overly permissive API is one that can be accessed without authentication, accepts more commands than it should, or doesn't limit how many requests can be made in a short period (which opens the door to brute-force attacks). If your software exposes an API — for a mobile app, for Xero integration, for a third-party tool — those connections need to be designed with security in mind.
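Rate limiting is commonly implemented as a token bucket: each client has an allowance that refills over time, and requests beyond it are refused. A minimal sketch — the limit of 3 requests per minute is an arbitrary example:

```python
import time

class RateLimiter:
    """Token bucket: allow at most `rate` requests per `per` seconds."""

    def __init__(self, rate: int, per: float):
        self.rate, self.per = rate, per
        self.allowance = float(rate)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill the allowance in proportion to the time elapsed.
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate / self.per)
        self.last = now
        if self.allowance < 1.0:
            return False  # over the limit — reject this request
        self.allowance -= 1.0
        return True
```

An attacker trying thousands of passwords or API keys per minute hits the limit almost immediately, while legitimate traffic never notices it.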

What to ask your developer: "What APIs does the system expose? How are they secured? Are there rate limits and authentication requirements?"

Pitfall 10: Data Stored Where It Shouldn't Be

Sometimes sensitive data ends up somewhere unexpected — in a debug log file, in a CSV export that's left on a public web path, in an error message that displays too much information, or in a development database that has less security than the production one.

I've seen systems where full credit card numbers appeared in error logs. I've seen customer data sitting in a publicly accessible folder because someone was testing an export feature and forgot to clean it up. These aren't malicious acts — they're oversights — but they have the same effect as a deliberate breach.

Sensitive data should only exist in places that are secured appropriately. Development and testing environments should use fake or anonymised data, not copies of real customer data.

What to ask your developer: "Are there any logs or export files containing customer data? Is real customer data used in development or testing environments?"

The Tone This Deserves

None of this is meant to make you panic about your existing software. Most bespoke systems built by competent developers handle these concerns well. The gaps usually appear in systems that were built quickly under budget pressure, or maintained over many years without a security review.

The goal is informed ownership. When you know the right questions, you can have a productive conversation with your developer — not an accusatory one. "Have we got audit logging? I'd like to add it to our next sprint" is a perfectly reasonable request.

If your developer gets defensive about these questions or can't answer them clearly, that itself is useful information.

Checklist: 10 Questions to Ask Your Software Developer

  1. What role-based access controls are in place — can you show me what a standard user can and can't access?
  2. Do we have audit logging? Can I see a log of who accessed what in the last 30 days?
  3. Is sensitive data encrypted at rest and in transit? How are passwords stored?
  4. What's the session timeout setting, and can we configure it based on role or data sensitivity?
  5. How does the system protect against common attacks like SQL injection and cross-site scripting — has this been explicitly tested?
  6. When were third-party dependencies last updated? Is there a regular process for security patches?
  7. How often is data backed up, where is it stored, and when was the last time a restore was actually tested?
  8. Is two-factor authentication available and enforced for accounts with admin or sensitive access?
  9. What APIs does the system expose, how are they authenticated, and are there rate limits in place?
  10. Is real customer data ever used in development or test environments? Are there any log files or exported files containing personal data?

A good developer will answer these questions confidently and in plain English. If any answer is "we haven't done that" — that's fine, it's a starting point. Add it to your next development cycle and prioritise based on the sensitivity of the data your system holds.

Security isn't a one-time checkbox. It's an ongoing conversation between you and whoever maintains your software. Starting that conversation is the most important thing you can do.

Quick Questions

How do I know if my custom software is secure?

Honestly, you often can't tell by looking at it — security gaps are usually invisible from the outside. The most reliable approaches are: asking your developer specific questions (see the checklist at the end of this article), having a third party do a security review, and looking at what happens when things go wrong (do you get alerts? are there logs?). If your developer can't answer basic questions about encryption, access control, and backups, that's a warning sign.

Is custom software more or less secure than off-the-shelf tools?

Either can be more secure — it depends entirely on how it's built and maintained. Off-the-shelf tools benefit from large security teams and frequent updates, but they're also high-profile targets for attackers. Custom software is a smaller target, but security depends completely on the developer's practices. A well-built custom system with proper access controls, encryption, and update management can be very secure. A cheaply built one with no thought for security can be shockingly vulnerable.

What's the most common security mistake in bespoke business software?

Role-based access control is the most common gap I see — everyone in the system has access to everything, rather than people seeing only what they need. Close second is outdated dependencies (the libraries and components the software is built on), which accumulate known vulnerabilities over time if they're not regularly updated. Both are easy to prevent during development and time-consuming to fix later.

What should I do if I suspect my software has been compromised?

Contact your developer immediately and take the affected system offline if possible to prevent further damage. Change all passwords and revoke any API keys associated with the system. If customer data may have been accessed, you have obligations under the NZ Privacy Act 2020 to notify affected individuals and potentially the Privacy Commissioner. Document everything from the moment you suspect a breach — what you saw, when, and what actions you took.
