The Orbtronics Quality Cycle
Quality Assurance (QA) is an integral element of any software development lifecycle, often serving as the last line of defense before a product goes live. Many development teams employ a blend of form-based testing strategies and comparison against Google Lighthouse metrics for comprehensive coverage of performance, search engine optimization, security, and responsiveness. This article details a structured, form-based approach to QA, describing the forms development teams should employ when testing and how Google Lighthouse metrics can be integrated into your workflows.
The Orbtronics Quality Cycle is systematically segmented into four major phases, conducted in the following order:
- Feature Verification Checks
- Environment Compatibility Assessments
- Web Quality Metrics & Tuning
- Speed and Stability Evaluations
Each phase has its distinct role and complements the others, paving the way for a robust and thoroughly vetted application. (The final phase, Speed and Stability Evaluations, is conducted as part of Web Quality Metrics & Tuning.)
Feature Verification Checks
Here, we rigorously validate the application’s functionalities.
Functionality Check:
We test the website or application against the following checklist:
- Identify any recently changed features/modules, making note of them in a Changelog.
- Determine any features/modules impacted by recent changes, and make note of them.
- Execute existing test cases for the impacted features.
- Verify application behavior with any previous test results, comparing them with previously documented results if there are any.
Boundary Value Testing:
- The team identifies any functions that require inputs, e.g., forms.
- Determine the minimum and maximum valid values for those inputs.
- Following this, we test the boundary values. Using extremely low and extremely high input values, we can observe how the program handles them. Are these values within the range that the application’s input is meant to receive?
- Finally, document the input handling behavior. If a field accepts input outside the desired range, this must be noted in the changelog, detailing both the acceptable range and the invalid values that were accepted.
- Identify any areas that are more prone to errors, using intuition and experience to flag features or segments of the application or website that one would expect to produce an error.
- Test those known or suspected problematic areas and scenarios. Finally, verify the error handling and the application’s behavior following this vetting.
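The boundary-value steps above can be sketched in Python. The `validate_age` function and its 18–120 range are hypothetical stand-ins for whatever input validation the project under test actually performs:

```python
# Hypothetical example: boundary value tests for an age field that is
# assumed to accept integers from 18 to 120 inclusive.

def validate_age(value: int) -> bool:
    """Return True if the value is within the accepted range."""
    return 18 <= value <= 120

def run_boundary_tests() -> dict:
    """Exercise the minimum, maximum, and just-outside-boundary values."""
    cases = {
        "min": 18,         # lowest valid value
        "max": 120,        # highest valid value
        "below_min": 17,   # just outside the lower boundary
        "above_max": 121,  # just outside the upper boundary
    }
    # Record the observed behavior so it can be noted in the changelog.
    return {name: validate_age(value) for name, value in cases.items()}

results = run_boundary_tests()
```

In a real project these cases would live in the test suite, with any out-of-range acceptance documented in the changelog as described above.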
Environment Compatibility Assessments
We then have to perform compatibility tests in order to ensure seamless operation of any website or application we build across a spectrum of hardware and software environments.
- Environment Parameterization: We specify target Operating Systems, Browsers, and Hardware configurations that the website or application will need to run on.
- Cross-Browser Compatibility: We then test whether the application/website works and displays as intended on the following browsers: Google Chrome, Mozilla Firefox, Safari, and Microsoft Edge. If any feature or design element does not display as intended in one of these browsers, we address this by using browser-specific CSS prefixes or tools like Autoprefixer.
- Device Compatibility: We test the application or website on phones, tablets, and desktops, ensuring that interactive elements are easy to interact with. We also endeavor to test on different operating systems where possible.
- Resolution & Display Compatibility:
- Test whether the application/website displays on various screen resolutions as expected, and whether page text size scales appropriately.
- Ensure that colours are distinguishable and maintain contrast ratios between elements so that they are not too difficult for some users to read.
- Use SVG images instead of JPEG and PNG for vector illustrations and graphics: SVGs are resolution-independent, so they scale without quality loss, and they are typically smaller for this kind of artwork.
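The contrast check mentioned above can be automated. This Python sketch implements the WCAG 2.1 relative-luminance and contrast-ratio formulas, which can be used to verify that text/background pairs meet the 4.5:1 minimum for normal text; the colour values are illustrative:

```python
# Sketch of the WCAG 2.1 contrast-ratio calculation.

def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    s = c / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background yields the maximum ratio of 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
```

A ratio of at least 4.5 passes for normal body text; large text needs only 3:1 under the same guideline.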
- CSS & Layout Compatibility: Ensure that all target browsers fully support all CSS features that are used in the project. This is important since not every browser supports the latest CSS features immediately.
- Language & Global Compatibility: For accessibility, we ensure that character encoding is set correctly (UTF-8 recommended).
- Media Compatibility:
- All videos should be in WebM or MP4 format, preferably WebM, to improve page load speed.
- Use WebP-format for photos and images instead of JPEG and PNG.
- Third-Party Integrations: Following this, you need to test all plugins and third-party scripts in order to ensure that they function correctly across all targeted browsers and devices.
- Security: Finally, we must ensure that HTTPS is used across the site for secure connections.
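As a rough illustration of that HTTPS check, the following stdlib-only Python sketch scans a page for `src`/`href` attributes that still point at plain `http://` resources; the HTML snippet and URLs are made up for the example:

```python
# Minimal sketch: flag insecure http:// resource references in an HTML
# page so they can be upgraded to HTTPS before launch.
from html.parser import HTMLParser

class InsecureLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []  # URLs still served over plain HTTP

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append(value)

html = """
<img src="http://example.com/logo.png">
<a href="https://example.com/">home</a>
<script src="http://cdn.example.com/app.js"></script>
"""
finder = InsecureLinkFinder()
finder.feed(html)
```

Any URL collected in `finder.insecure` is mixed content and should be rewritten to HTTPS (or served via a protocol-relative equivalent on the secure origin).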
Web Quality Metrics & Tuning
In this final phase of testing, we evaluate a static website or dynamic web application’s performance metrics such as load time, responsiveness, stability, and fitness for launch.
- Run the Google Lighthouse Audit: Once you run the audit, you receive a score from 0 to 100 in each of four categories of website quality: Performance, Accessibility, Search Engine Optimization, and Best Practices. Lower scores are highlighted in red, indicating areas that urgently require improvement.
- Metric Identification: Utilize Google Lighthouse to identify Key Performance Indicators (KPIs) like Performance, Accessibility, Best Practices, and SEO. A benchmark standard is to score at least 80 in each category, aiming for at least 90 in each if possible.
- The following are descriptions of eight critical areas that require strong scores for a high-quality user experience on any site or application:
- First Contentful Paint: Time taken to render the first piece of DOM content.
- Speed Index: Measures how quickly content is visually displayed during page load.
- Time to Interactive: The time it takes for the page to become fully interactive.
- ARIA Landmarks: Use of ARIA landmarks to identify regions of the page.
- Image Alt Attributes: Checks if all images have alt attributes for screen readers.
- Meta Description: Presence of meta description for SEO.
- Status Codes: Checks for any problematic HTTP status codes.
- HTTPS: Checks if the page is served over HTTPS.
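To act on these scores programmatically, category results can be read from Lighthouse's JSON output (`lighthouse <url> --output=json`). The report below is a hand-written stand-in for illustration, not real audit data:

```python
# Sketch: check category scores from a Lighthouse JSON report against
# the benchmark of at least 80 (Lighthouse stores category scores as
# 0-1 fractions, so the threshold is 0.80).
import json

THRESHOLD = 0.80

def failing_categories(report: dict) -> list:
    """Return the names of categories scoring below the benchmark."""
    return [
        name
        for name, category in report["categories"].items()
        if category["score"] < THRESHOLD
    ]

# Stand-in report; a real audit supplies these values.
sample = json.loads("""
{"categories": {
    "performance":    {"score": 0.91},
    "accessibility":  {"score": 0.76},
    "best-practices": {"score": 0.85},
    "seo":            {"score": 0.95}
}}""")

needs_work = failing_categories(sample)
```

Here the sample report would flag `accessibility` as the category needing optimization before re-auditing.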
- Implement Optimizations:
- Speed and Stability Evaluations:
- Ensure that the WebP format is used where supported, and optimize and compress images: PNG, JPEG and WebP Compression
- Lazy-load images where possible: Guide to Lazy-Loading Images
- Pre-load critical assets by using the <link rel="preload"> element to speed up page rendering.
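As a small illustration of the preload step, tags can be generated per asset type; the helper, its extension map, and the asset paths here are hypothetical:

```python
# Illustrative helper: emit <link rel="preload"> tags for critical
# assets, mapping each file extension to the matching `as` attribute.
AS_TYPES = {".css": "style", ".js": "script", ".woff2": "font", ".webp": "image"}

def preload_tag(url: str) -> str:
    ext = url[url.rfind("."):]
    as_type = AS_TYPES[ext]
    tag = f'<link rel="preload" href="{url}" as="{as_type}">'
    # Preloaded fonts must also carry crossorigin, even when the font
    # is served from the same origin as the page.
    if as_type == "font":
        tag = tag[:-1] + ' crossorigin>'
    return tag

tags = [preload_tag(u) for u in ("/css/main.css", "/fonts/inter.woff2")]
```

The generated tags belong in the document `<head>`, before the stylesheets or scripts that would otherwise discover those assets late.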
- Accessibility Optimizations:
- Ensure that we use semantic HTML5 tags and ARIA roles to improve compatibility with screen readers.
- Also ensure that there is sufficient contrast between text and background colours.
- SEO Optimizations:
- Make sure the layout is mobile-friendly. This can be tested on devices such as mobile phones, tablets, laptops, and desktop screens. Alternatively, or in conjunction, we recommend using a Responsive Viewer to view the site or web application at multiple screen sizes at once. A Responsive Viewer extension can be downloaded via the following URL: Google Chrome Responsive Viewer Extension
- We also use schema markup in metadata sections to provide search engines with additional information about the site’s content. You can validate your Schema Markup here: Schema Markup Validator
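A minimal sketch of producing that schema markup as JSON-LD with Python's standard library; the organization name and URL are placeholders, not real site data:

```python
# Sketch: build JSON-LD schema markup for an organization page.
import json

def organization_jsonld(name: str, url: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
    }
    # The serialized object is embedded in the page head inside a
    # <script type="application/ld+json"> tag.
    return json.dumps(data, indent=2)

markup = organization_jsonld("Example Co", "https://example.com")
```

The resulting markup can then be pasted into the Schema Markup Validator mentioned above to confirm that search engines will parse it correctly.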
- Best Practice Optimizations:
- Implement strong security headers to protect the site against cross-site scripting and data injection attacks.
- We may also implement a Service Worker to enable offline usage of the website or application and improve subsequent load performance. How to Add a Service Worker to a Website
- We may also add a web app manifest (a JSON file) to specify how a web application should behave once installed on a user’s device.
- Utilize the HTTP/3 protocol, the latest version of HTTP, which runs over QUIC for faster and more reliable connections. More About HTTP/3
- Performance Analysis: Finally, we compare Lighthouse scores before and after performance optimizations. If any score falls below 80, the optimizations must be revisited. Sometimes we may have no choice but to sacrifice a few points on one metric, but at least three of the four categories should be rated 90 or above, with 80 as the absolute minimum for any category.
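The pass criterion just described can be encoded directly; the before/after score dictionaries below are invented examples:

```python
# Sketch of the pass criterion above: every category must score at
# least 80, and at least three of the four should reach 90.

def go_no_go(scores: dict) -> bool:
    values = scores.values()
    return all(s >= 80 for s in values) and sum(s >= 90 for s in values) >= 3

# Invented before/after audit scores for illustration.
before = {"performance": 72, "accessibility": 88, "best_practices": 91, "seo": 95}
after  = {"performance": 90, "accessibility": 90, "best_practices": 93, "seo": 95}

decision = (go_no_go(before), go_no_go(after))
```

In this example the pre-optimization audit fails (performance below 80) while the post-optimization audit passes, so only the second build would proceed toward launch.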
Software Pre-launch Checklist
A final assurance check integrating all the prior testing dimensions serves as the capstone of the Software Pre-launch Checklist. After conducting meticulous Functional Tests, Compatibility Tests, and Performance Tests, and optimizing based on Google Lighthouse measurements, we arrive at this crucial juncture. The objective is to ensure that every element of the software has not only been individually vetted but also works cohesively in an integrated environment.
Only when the software passes this comprehensive, final assurance check do we consider it ready for launch. The aim is to mitigate risks and ensure that the software delivers a reliable, robust, and user-friendly experience right from its first public release.
- Comprehensive Checklist: Elements are collated from the Functional, Compatibility, and Performance checklists.
- Regression and Debugging: Full regression tests are conducted, utilizing Google Lighthouse metrics for debugging and performance optimization.
- Employ Google Lighthouse Optimizations: Metrics such as load time, responsiveness, application stability, and fitness for launch are measured, and recommended optimizations are applied until an acceptable score of at least 80 is met in each category.
- Validation and Approval: Final validation against project requirements must be completed and stakeholder approvals obtained before publication.
Metrics for Evaluation
- Cumulative Requirement Coverage
- Overall Google Lighthouse Scores
- Go/No-Go Status based on predefined criteria
This form-based approach ensures a step-by-step, thorough vetting of each software product, substantiated by data-driven Google Lighthouse metrics. By adhering to this structured methodology, we not only achieve robust functional compliance but also optimize performance and assure compatibility, delivering websites and software that stand up to rigorous quality standards.