We begin each project by planning how we will operationalize it. Wherever possible, we keep that plan in full view throughout the post-launch process. This is essential to launching and managing complex applications and sites at scale.
During our initial discovery stage, we will work with your team to build a plan and process for managing the launch and post-launch operations. Our discovery will look at resources, both technical and organizational, as well as expected load and growth. As we build the project, we will keep the operational plan in mind, revising it as the project matures.
Our project management team runs point on every project. They are the interface layer between our internal teams and the project stakeholders. They build the roadmap, keep the project on schedule, handle new requests and bug reports, manage the QA, and supervise the releases.
All projects change as we start building, and we want our development to be responsive to evolving business needs. We follow an agile methodology, allowing us to adapt quickly to changes. When we start a new project, we bring the project stakeholders into our project management workflow, so they have a clear sense of the scope, the current sprint, and the roadmap. As new requirements arise, our PMs will work with the dev and ops teams to slot them into the schedule, while balancing costs, time, and system dependencies.
Our PMs are also the first users of the applications we build. They perform extensive testing, verifying that the application performs to specification across browsers and devices. When issues are detected, whether in QA, via automated monitoring, or through user reports, our PM team is our first line of triage, routing the request to the right party, documenting the issue, and verifying the solution.
At the end of the discovery and design process, we will assess the expected production infrastructure requirements, modeled on the technology stack, project requirements, and expected load. We will conduct this assessment with the project and technical stakeholders to make sure the technology and infrastructure are a sustainable long-term solution.
At Convertiv, we have worked hard to acquire expertise across a wide range of infrastructure tools and providers. We try to stay agnostic to specific stacks or tools, looking to deploy the tools best suited to the challenge at hand. We have a number of infrastructural and operational providers with whom we have long-term partnerships. We also regularly work with hybrid cloud solutions and on-premise infrastructure when organizational and security concerns dictate such an approach.
DevOps is the hand-off between ongoing development and day-to-day operational support. For each type of project we have a process and a set of tools to manage this hand-off in a transparent, testable, and repeatable way.
Early in development, we stand up a Continuous Integration/Continuous Deployment (CI/CD) pipeline. This pipeline allows development to flow smoothly from our development environments, through the QA process, and eventually into production. Each step is monitored and tested, while also producing a comprehensive audit trail. Database and infrastructure changes are version-controlled alongside the application code, allowing any developer to quickly understand the entire project design and limiting the risk of hidden, undocumented changes.
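A pipeline of this shape can be sketched in a few lines. This is a minimal illustration, not our actual tooling; the stage names and the in-memory audit log are hypothetical stand-ins for a real CI service and its records.

```python
# Minimal sketch of a staged CI/CD flow that halts on failure and
# records every stage outcome as an audit trail. Illustrative only.
import datetime

AUDIT_LOG = []

def run_stage(name, step):
    """Run one pipeline stage and record the outcome for the audit trail."""
    ok = step()
    AUDIT_LOG.append({
        "stage": name,
        "ok": ok,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    if not ok:
        raise RuntimeError(f"pipeline halted at stage: {name}")
    return ok

def pipeline():
    run_stage("build", lambda: True)   # compile / bundle assets
    run_stage("test", lambda: True)    # unit and integration tests
    run_stage("deploy", lambda: True)  # promote to the next environment
```

Because each stage appends to the log before the pipeline can advance, a failed run still leaves a complete record of how far it got.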
Every release also contains the logic for a non-destructive rollback. In the event of an issue with a release, our DevOps team can roll back the changes, review the issue with the development team, and then revise the release to resolve it.
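The idea behind a non-destructive rollback can be shown with a toy schema migration. This is a hedged sketch, not our migration framework; the `Migration` class and the schema dictionary are invented for illustration.

```python
# Sketch of a reversible release step: the rollback only undoes what
# the forward step introduced, so no existing data is destroyed.

class Migration:
    """A release change that can be applied and non-destructively rolled back."""

    def __init__(self, name, apply_fn, rollback_fn):
        self.name = name
        self.apply_fn = apply_fn
        self.rollback_fn = rollback_fn

def add_column(schema):
    # Additive change: old code keeps working if we later roll back.
    schema["users"].append("last_login")

def remove_column(schema):
    # Rollback removes only the column the release added.
    schema["users"].remove("last_login")

migration = Migration("add-last-login", add_column, remove_column)

schema = {"users": ["id", "email"]}
migration.apply_fn(schema)      # forward: adds "last_login"
migration.rollback_fn(schema)   # back to the pre-release state
```

Keeping forward and rollback logic paired in the same release is what lets the DevOps team revert quickly without waiting on the development team.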
Building and managing large, mission-critical applications is complex, and even minor disruptions can have significant business impacts. Instrumentation gives us real-time, ongoing visibility into the application stack. When the system enters an exceptional state, we want to be notified immediately and to provide our developers with the complete event log. We choose and tailor the tools to the project's technology and team resources.
Depending on the application's needs, we deploy up to four levels of instrumentation.
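At the application level, instrumentation often looks like timing and exception capture around key operations. The sketch below uses only the Python standard library; the subsystem name and the `process_order` function are hypothetical, and a real deployment would ship these events to a monitoring service rather than the local log.

```python
# Minimal application-level instrumentation: record call duration and
# log exceptional states with full context. Illustrative only.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")  # hypothetical subsystem name

def timed(fn):
    """Wrap a call so its duration and any failure are logged."""
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        except Exception:
            log.exception("%s failed after %.3fs",
                          fn.__name__, time.monotonic() - start)
            raise
        finally:
            log.info("%s took %.3fs",
                     fn.__name__, time.monotonic() - start)
    return wrapper

@timed
def process_order(order_id):
    return {"order": order_id, "status": "ok"}
```

Because the wrapper logs in a `finally` block, the event log stays complete whether the call succeeds or raises.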
Building and maintaining secure apps is a constant process. We have a strong set of tools, best practices, and patterns we deploy to help create and maintain secure systems, but at the heart of all security is process. We continually build and refine our security processes as the security landscape evolves.
At the start of an engagement, our security experts look at the project scope and build a security threat model. Our goal is to create a clear picture of the attack surfaces of the application and the risks we will face. Once we have a risk model, we can have a conversation with technical and legal stakeholders to balance the competing factors around our security processes.
While all the systems we build are designed to conform to security best practices, not every application should be held to an identical level of scrutiny. For instance, applications that handle secure financial transactions must pass a PCI-DSS audit. PCI-DSS auditing and remediation are expensive and time-consuming, though they clearly provide a significant security benefit. A public-facing, informational site would not need to be held to that level of scrutiny.
If, in conversation with these stakeholders, we decide a certain level of scrutiny is required, we will work with internal and external auditors. We regularly subject our code to both static code analysis and dynamic (penetration) testing. We build applications on the latest best practices, with built-in technology designed to defeat known attack vectors. Our infrastructure partners deploy security systems, firewalls, and monitoring to detect and respond to emergent threats. We engineer our systems with redundancy and disaster recovery plans that allow us to recover in the event of an attack.
Security is an ongoing process, in conversation with our clients and partners.
Documentation is a hallmark of well-written software. Inadequate documentation increases error rates, lengthens development cycles, slows response times, and ultimately shortens the life of the application, all while increasing costs.
Our documentation efforts begin at the ground floor, in the structure of the code itself. We use established, normalized, testable patterns in all our systems. For example, our APIs follow RESTful patterns and MVC structures, and our front-end design uses the Inverted Triangle (ITCSS) methodology for naming and structuring style definitions. Following well-established patterns makes it easy for experienced developers to jump in and start contributing without having to accustom themselves to a radically new system.
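The RESTful pattern mentioned above can be reduced to a small sketch: resources map to paths, and HTTP verbs map to handlers. This is a plain-Python illustration under assumed names (`/articles`, the `route` decorator), not a framework we use.

```python
# Toy RESTful dispatch table: (method, path) pairs map to handlers,
# so new resources follow the same predictable pattern. Illustrative only.
ROUTES = {}

def route(method, path):
    """Register a handler for an HTTP method and path."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/articles")
def list_articles():
    return {"status": 200, "body": ["post-1", "post-2"]}

@route("POST", "/articles")
def create_article():
    return {"status": 201, "body": "created"}

def dispatch(method, path):
    handler = ROUTES.get((method, path))
    return handler() if handler else {"status": 404, "body": "not found"}
```

Because every resource is registered the same way, a developer who has seen one endpoint can predict the shape of all the others.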
As the code takes shape, our developers document within the code, explaining complex sections, documenting custom functions, and especially calling attention to project-specific deviations from the established pattern. All our code repositories include documentation detailing how to stand up a local environment, fetch required dependencies, build the code, and deploy via CI.
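In practice, the in-code documentation looks something like the following. The function, its parameters, and the stated CMS constraint are all hypothetical; the point is the shape of the docstring, including the called-out deviation from the standard pattern.

```python
# Hypothetical example of documenting a custom function, including a
# project-specific deviation from the usual slug pattern.
import re

def normalize_slug(title, max_length=80):
    """Convert a page title to a URL-safe slug.

    Lowercases the title, replaces runs of non-alphanumeric characters
    with hyphens, and trims leading/trailing hyphens.

    Project-specific deviation: we truncate hard at ``max_length``
    rather than at a word boundary (assumed CMS routing constraint).
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug[:max_length]
```

Documenting the deviation where it lives means the next developer does not have to rediscover why the code departs from the pattern.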
As we approach an operational state, we centralize the documentation into a deliverable. Depending on the project, we may deliver style guides, API specifications, build instructions, and/or code SDKs.