2.6 Quantexa Upgrade Guide
Quick Upgrade Overview

This article provides a practical review of the Quantexa 2.6 upgrade, highlighting key areas to watch out for based on collective project experience. It should be used as a strategic companion to the official technical guides.

Official Migration Guide: 2.5 → 2.6 Upgrade Migration Guide
Release Information: Community Release Announcement | 2.6.0 Release Notes

The 2.6 upgrade has several non-negotiable prerequisites. Before beginning, it is essential to confirm that your project has already completed the following:

All Data Sources on Data Fusion: If you have any data sources on the legacy Lenses framework, they must be migrated first. The full process is detailed in Migrating to Data Fusion.

Scoring on the Assess Framework: For projects still on Scoring Framework 1.0 (SF1), migrating to Assess is mandatory. This is not a direct technical conversion; it requires upfront design work to identify the right course for your project. The outcome could be to adopt modern Detection Packs if they are a good fit with your existing scoring logic, or to re-implement your logic as custom scores within the Assess framework. It is strongly recommended to discuss your approach with a Quantexa Architect to determine the best path forward. For guidance on the technical migration, see Migrating from Scoring Framework 1.0 to Assess.

Fusion-Compatible Data Packs: Ensure all Data Packs used by your project are the modern Fusion versions. Check the Data Pack Compatibility Matrix to confirm your versions are compatible with Quantexa 2.6.

Community Upgrade Guidance

Core Product Changes

Delta Lake Configuration: A straightforward configuration change in the spark-submit command. The key consideration is ensuring the Delta Lake version in your dependency-versions.gradle file is compatible with your Spark version, which can be verified on the Delta Lake release page.
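As an illustrative sketch only (the package coordinates, version, and jar name below are placeholders, not your project's actual values), the standard Delta Lake integration settings on a spark-submit command look like this:

```shell
# Hypothetical spark-submit fragment enabling Delta Lake.
# Pick the delta-core version that matches your Spark version (see the
# Delta Lake release page); "your-etl-assembly.jar" is a placeholder.
spark-submit \
  --packages io.delta:delta-core_2.12:2.4.0 \
  --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
  --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
  your-etl-assembly.jar
```

Note that newer Delta Lake releases (3.x onwards) publish the artifact as io.delta:delta-spark rather than delta-core, so check the release page before copying coordinates.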
Explorer-*.json Configs: Simple manual edits are required to align with updated JSON schemas in some Batch Resolver configuration files.

Batch Resolver Changes: A straightforward manual config change in reference.conf. The resolver now generates more output files; while a compatibility script is provided, it may be cleaner in the long run to update any downstream test code to handle the new format directly.

Graph Script and Assess Imports: This is handled automatically by the repository tool, which refactors the library imports.

Assess Changes: This area typically requires significant attention. The Batch Resolver data models have changed, which has a direct impact on Assess. If your scoring logic uses Entity Attributes, expect to refactor custom steps, as some fields have been removed or have had their data types changed.

Other Migrations

Java 17 Upgrade: This is a significant undertaking that extends beyond the codebase. It requires upgrading the JDK across all environments: local developer machines, Docker images, CI/CD runners, and the production Spark clusters. It is crucial to engage with platform and infrastructure teams early to plan this.

Gradle 8 Upgrade: The migration to Gradle 8.4 can be complex. An efficient approach is to generate a clean project using the 2.6 Repository Tool and use its working Gradle setup (build.gradle, settings.gradle, etc.) as a reference for your own project.

LiteGraph to ScoringGraph Migration: A key migration with both automated and manual steps. This is a worthwhile effort, as ScoringGraph unlocks significant new functionality, most notably support for Entity-to-Entity edges and compatibility with the new Attribute types from the updated resolver configs. The repository tool handles much of the refactoring, but manual intervention is still required. Following the official Migrating to Scoring Graph guide closely is recommended.
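A minimal sketch of the reference-based Gradle approach described above, assuming your project uses the Gradle wrapper (the reference-project path is a placeholder for wherever you generated the clean 2.6 project):

```shell
# Bump the wrapper in your own repository (standard Gradle command):
./gradlew wrapper --gradle-version 8.4

# Then diff your build files against the clean project generated by the
# 2.6 Repository Tool ("../reference-project" is a hypothetical path):
diff -u ../reference-project/settings.gradle settings.gradle
diff -u ../reference-project/build.gradle build.gradle
```

The diffs highlight which plugin versions and build-script idioms the working 2.6 setup uses, so you can port them selectively rather than upgrading blind.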
Alert Scorecard Migration: It is critical not to overlook this one-off data migration script for historical alert data; it is necessary to ensure re-alerting performs correctly once upgraded. Refer to Migrating from Alerting 2.5.x to 2.6.x for detailed guidance.

Task View Table Migration: This migration updates the database schema for the Task Data visible in the UI, typically by adding two new columns. The process depends on your database technology. For RDBMS systems, it involves running DDL migration queries. Crucially, these DDL scripts are not included in the main dependency bundle; they are provided in a separate ZIP file that must be downloaded from the Quantexa artifact repository.

Additional Information

For more general best practices on topics such as setting up a development strategy, testing your migrated code, and releasing your upgrade, please refer to the other articles within the Upgrade Platform Library on the Quantexa Community site.

Adopting Graph Scripting QSL
As of versions 2.7.18 and 2.8.2, Graph Scripting QSL is the new, generally available standard for batch-based Network Generation, replacing the now-deprecated Graph Scripting DSL. The benefits of migrating to QSL are significant, including major performance gains, simplified low-code development, and alignment with the future direction of the Quantexa Platform. For a full overview of these benefits, please see our introductory article: Introducing Graph Scripting QSL: Faster, Smarter Graph generation capabilities. This article is the practical follow-up, designed to help Technical Leads and Developers plan and execute the migration from DSL to QSL.

When to Plan Your QSL Migration

With DSL now deprecated, all projects using it must plan for a migration. The timing depends on your project's current state:

New Projects: All new projects starting on platform version 2.7.18+ or 2.8.2+ should use QSL by default.
Existing Deployments (pre-v2.7): The migration from DSL to QSL should be planned as a key activity within your v2.7 (or later) upgrade project.
Existing Deployments (on v2.7+): The QSL migration can be prioritized and executed as a standalone project, independent of a major platform upgrade.

Migration Approach and Effort Planning

The official DSL-to-QSL migration process involves setting up a new, clean QSL configuration and then re-implementing your existing DSL expansion logic using the new path-based syntax. This is not an automated, in-place conversion. When planning for this effort, use the following as a baseline estimate:

Core Implementation: On average, projects require ~6 days of effort to perform the initial QSL setup and re-implement 1-3 expansion graphs, including initial development testing.
Additional Expansions: The effort does not scale linearly. Budget an additional 0.5-1 day of effort for each additional expansion path you need to migrate.
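As a rough sizing sketch using the averages above (illustrative arithmetic only, not an official estimation formula):

```shell
# Baseline ~6 days covers setup plus the first 1-3 expansion graphs;
# each additional expansion path adds roughly 0.5-1 day.
expansions=7                   # hypothetical total expansion paths to migrate
extra=$(( expansions - 3 ))    # paths beyond the 1-3 covered by the baseline
low=$(( 6 + extra / 2 ))       # ~0.5 day per additional path
high=$(( 6 + extra ))          # ~1 day per additional path
echo "Core implementation estimate: ${low}-${high} days"
# prints: Core implementation estimate: 8-10 days
```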
Formal Testing: Budget an additional 5 days for a full regression testing cycle. This allows time to thoroughly validate the graph outputs and investigate any differences. Historically, reported differences have typically been the result of rectifying undesirable DSL behavior rather than issues with QSL itself.

Key Considerations During Migration

As you plan your migration, be aware of these key technical differences between DSL and QSL:

Single Scoring Graph: QSL generates a single ScoringGraph. If your multi-use-case setup relies on different graphs for different use cases, you may need to configure multiple, independent graph-scripting modules.

Path-Based Expansions: QSL's path-based expansions are more precise than DSL's perimeter-based approach. For very complex DSL graphs, this may require you to define your logic as a larger number of more specific QSL paths.

Attribute Availability: Attributes not explicitly used in an expansion are not automatically carried through to scoring traversals. Review the migration notes on attributes to ensure your scoring logic has access to the data it needs.

Data Packs: How To Remove Customisations During an Upgrade
If your project has been running for some time, it is likely that you have had to customize a Quantexa Data Pack to meet specific requirements. Historically, this required "forking" the Data Pack repository and maintaining your own version of the code. While this approach provided flexibility, it came at a significant cost:

Increased Technical Debt: Your team became responsible for maintaining a growing fork of custom code.
Higher Upgrade Effort: Every platform upgrade required you to manually migrate and re-test your forked Data Pack, a complex and time-consuming process.
Missed Opportunities: Your forked version did not benefit from the continuous stream of new features, performance improvements, and bug fixes being added to the official Quantexa Data Pack releases.

Starting with platform version 2.6, Fusion Extensibility provides a powerful new model that allows you to apply the most common customizations without forking the code. This is a fundamental shift that enables you to stay aligned with the official Data Pack releases while still meeting your unique project needs. For more details on this framework, see the official Fusion Extensibility documentation on the Documentation Site.

The Opportunity: When to Migrate Your Customizations

If your project is on version 2.6 or later and you are still maintaining a forked Data Pack, you have a strategic opportunity to eliminate this technical debt. However, because migrating from a fork to the official Data Pack will introduce significant (and desirable) changes to your ETL output, it is strongly recommended to treat this migration as a standalone project, separate from a major platform upgrade. Attempting to do both at the same time makes it extremely difficult to perform root cause analysis: if you see a change in your data, is it because of the platform upgrade or because of the Data Pack migration?
By separating the two projects, you can isolate the variables and test each change independently, leading to a much smoother and lower-risk process.

The Migration Process: From Fork to Extensibility

The migration process involves a careful review and triage of every customization you have made.

Step 1: Review Your Forked Data Pack

Go through your forked repository's commit history and create a detailed list of every functional change you have made compared to the original Quantexa version.

Step 2: Triage Each Customization

For every single customization you have identified, you must now make a critical decision:

1. Is this customization achievable using the new Fusion Extensibility patterns?
If YES: Great! Your action is to plan to re-implement it using the new model.
If NO: Go to question 2.

2. Is this customization still a critical business requirement?
If YES: Your action is to acknowledge that you must remain on your forked version for now. You should then raise an enhancement request in the Quantexa Product Roadmap to have this capability added to the core product in the future. This will unblock your migration later.
If NO: Excellent! This is an opportunity to simplify. Your action is to plan to discard the customization and use the standard, out-of-the-box Data Pack functionality.

Step 3: Execute Your Migration Plan

Based on the outcome of your triage, your path is now clear:

If all of your critical customizations can be achieved via extensibility or have been deemed no longer necessary, you can proceed with re-implementing them and decommissioning your forked repository.
If even one critical customization cannot be achieved, you must pause the migration, raise the necessary enhancement requests, and continue to maintain your fork until the product supports your needs.

The Testing Strategy: Focus on Intent, Not Identical Output

This is the most critical concept to understand when testing your migrated Data Pack.
You should NOT be testing for identical output. Your old, forked Data Pack is likely behind the latest official version. The new version contains numerous bug fixes, functional improvements, and performance enhancements that will naturally and correctly lead to different ETL output. Your testing strategy must instead focus on validating the intent of your original customizations. For example:

If your customization was to add a new "Risk Score" derived field, your new test should verify that the "Risk Score" field is still being correctly calculated and added to the model, even if other parts of the record have changed due to product improvements.
If your customization was to exclude compounds based on a specific pattern, your new test should verify that the correct compounds are still being excluded based on your custom logic.
If your customization was to add a new "Source System ID" attribute to an Entity, your new test should verify that the "Source System ID" attribute is still present on the resolved Entities with the correct value.

By adopting this testing mindset, you embrace the benefits of the new Data Pack version while ensuring your critical business logic remains intact. The result is a dramatic reduction in custom code, a simpler and faster future upgrade path, and immediate access to the latest Quantexa features.

2.7 Quantexa Upgrade Guide
Quick Upgrade Overview

The 2.7 Quantexa Upgrade consists of three main parts:

Core Product Changes
Removal of Quantexa Incubators
Data Packs Migration

Most of the Core Product changes are automated migrations and minor adjustments that can be tested in a local environment (unless your project has Data Streaming, Entity Store, or Graph API configured). The migration to Delta Lake will be the biggest component, but it is also assisted by automated migrations, and some of the required effort can be avoided (please see below).

Quantexa Incubators will no longer be released in 2.7. Some of the utilities previously released as part of Incubators have moved to the Core Product code; the rest will still be accessible (e.g. for code forking) in the 2.6 Quantexa Incubators release. Within this area, it is expected that most effort will be consumed by the Data Generator migration; however, functional changes can be deferred if there is a strong need to minimize the upgrade timeline.

The bulk of the upgrade effort comes from the recommended regression testing of functional changes introduced by the Data Packs related migrations. Although the recommendation is to perform all available migrations, those functional Data Packs changes can be deferred if faced with severe time constraints (please see below).

This page aims to provide additional guidance related to the 2.7 Quantexa Upgrade. For the full list of required migration steps, please refer to the Documentation site migration guide: 2.6 → 2.7 Upgrade Migration Guide.

Release Notes: Community Release Announcement | 2.7 Release Notes

Community Upgrade Guidance

Core Product Changes

Migration to Delta Lake

This component is assisted by automated migrations and should require fairly simple configuration adjustments. It is recommended to perform a full end-to-end run once the migration steps are finalised. A significant part of the effort comes from handling metadata files, especially ones that already exist in Production environments.
A script is provided which converts legacy metadata Parquet files to Delta Lake format. However, that step is only required if an incremental-mode iteration persists state from pre-2.7 batch runs, as this requires information from the existing metadata. If completing a full ETL batch run following an upgrade to 2.7, this step is not required: a metadata.delta file is created automatically, provided the initial migration steps have been completed.

Removal of Quantexa Incubators

From 2.7, Quantexa will stop releasing the community repository Quantexa Incubators. Components that have been accepted as best practice have been migrated into the Core Product, and all previously released utilities will remain accessible (e.g. for code forking) in the 2.6 Quantexa Incubators release. The affected components are:

ETL validation tools (migrated into the platform)
Elasticsearch snapshot scripts (migrated into the platform)
ETL test utilities (migrated into the platform)
Data Generator core (moved into Project Example)
Dynamic Graph Script utilities
Graph Script REST API starter
Spark, Scala, and Test Analytics utilities
SparkTestSuite

Most of the above are straightforward migrations with no functional changes and can be performed and tested in a local environment.

Data Generator core moved into Project Example: In 2.7, Data Generator moves into the Core Product, where it will benefit from General Availability support. All projects are strongly recommended to perform this migration (please consult your Quantexa Architect if you are considering omitting it).

Dynamic Graph Script utilities: This will apply to the very few projects that do not follow the typical Batch Resolver + DSL network generation process. We recommend consulting your Quantexa Architect if this is the case, to ensure this migration is well executed and in line with Quantexa's Best Practice.

Data Packs Migration

2.7 Data Packs come in two flavours: with or without functional changes.
For the full description of functional as well as non-functional changes, please refer to the Data Packs Release Notes. To fully benefit from the improvements introduced in 2.7 Data Packs, we recommend opting for the version containing functional changes. However, projects should be mindful that doing so will extend the overall timeline of the upgrade. The additional effort will vary between projects, depending on the available regression testing setup, the ease of performing end-to-end batch runs, the need for model governance, etc.

Release 2.7_1.3: This release of Data Packs is compatible with Q2.7 and Parsers 3. It contains no new functionality compared to the 2.6_1.3 release and therefore gives projects the option of performing no/limited regression testing when upgrading.

Release 2.7_2.1: This release of Data Packs is compatible with Q2.7 and Parsers 4. It contains no new functionality compared to the 2.6_2.1 release and therefore gives projects the option of performing no/limited regression testing when upgrading.

Release 2.7_2.2: This release of Data Packs is compatible with Q2.7 and Parsers 4. It contains all new functionality developed by the Data Packs team.

Please follow the guidance below when deciding which release of Data Packs to use.

Using Parsers 3: If the project is utilising Parsers 3, it must use the 2.7_1.3 release. The project can then make a risk-based decision on the level of regression testing performed, given there is no new functionality contained within this release. This is currently the last planned version of Data Packs that will be compatible with Parsers 3, so there is an inherent assumption that projects using Data Packs will need to migrate to Parsers 4 prior to upgrading to Quantexa 2.8.

Using Parsers 4: If the project is utilising Parsers 4, the default option should be to use the 2.7_2.2 release.
Projects should only consider using the 2.7_2.1 release if the following criteria are met:

The project is performing an incremental upgrade from QE2.6 to QE2.7, with no plans to update to Q2.8 in the near future.
There is an ongoing Quantexa presence within the Delivery Team for the project.

If the above criteria are met, the option should be discussed with the project Architect and/or Technical Delivery Oversight, and a decision made as a project team. Note that there will not be a 2.8_2.1 release, and therefore projects that utilise the 2.7_2.1 release will be introducing two sets of functional changes when upgrading to QE2.8 (i.e. the changes in the 2.7_2.2 and 2.8_2.3 releases).

Recent Deprecations

Transaction Viewer

As of version 2.4 of the Quantexa Platform, Transaction Viewer has been deprecated and will be removed in a future major release. Data Viewer, which is powered by the Explorer API, is the recommended replacement for viewing records from your data sources. Please refer to the Data Viewer Migration Guide for more details.

Quantexa Parsers 3.X

The 3.X versions of Quantexa Standard Parsers and Quantexa Data Models have been deprecated and will be removed in a future major release (support will be removed in August 2025). The Parsers 4 migration is not currently advised for projects where any of the following applies:

TBML projects (or any project using a parsed business name field as part of a document ID)
Correspondent banking projects
Projects using the entity-level alerting framework

We also advise against migrating to Parsers 4.X alongside any other QP product upgrades or migrations. Delivery and DEA are jointly working on ways to reduce risk and effort around this migration, including more extensive internal testing.
They will agree and document a best-practice approach for this migration, including comprehensive guidance for testing (based on benchmarking tests), supplementary documentation for manual changes, and guidance for project teams on leveraging the Entity Resolution (ER) improvements from Parsers 4. More details are available in the Quantexa Parsers Release Notes.

Software Compatibility

Check this page for software compatibility. Please note that Java 8 is no longer supported in Quantexa 2.7.

Additional Information

Useful resources on the Quantexa Documentation Site:

Upgrade Best Practice for instructions before commencing an upgrade.
Ongoing development during upgrades for best-practice guidance on continuing meaningful development during an upgrade.
Follow the Release Announcements topic to receive notifications of releases.
Quantexa Release Notes for changes and information specific to different versions of Quantexa.

How to Release Your Upgrade: Approaches and Considerations
You have successfully tested your upgrade and are now dev-complete. The final stage of the execution process is to plan and execute the merge of your upgrade branch back into the main codebase, making it the new standard for all future development. A smooth merge is the result of careful coordination and communication. This guide provides a step-by-step checklist to ensure your upgrade is integrated without disruption.

Step 1: The Final Polish - Apply the Latest Patch

Before you begin the sign-off and merge process, ensure you are integrating the best possible version of the software. It is strongly recommended to update your upgrade branch to the latest available minor version or patch for your target release (e.g., if you upgraded to v2.8.2, check if v2.8.3 is now available). These releases contain the latest bug fixes and security updates but no breaking changes, making them low-risk to apply. You can find instructions for this process in the documentation on applying a minor upgrade.

Step 2: Gain Stakeholder Sign-Off

Before planning the merge, you must get formal approval from all relevant stakeholders. Do not wait until the last minute for this; seek sign-off as soon as individual components are ready for review.

Action: Secure approval from the following groups:

Product Owners / Business Users: Present your UAT results and regression reports to demonstrate that the functionality is correct and performance is acceptable.
Technical Leads / Architects: Get your code reviewed. Your Pull Requests should be approved well in advance of the target merge date to allow time for feedback and iteration.
Downstream Consumers: If other teams or applications consume your project's APIs or data outputs, ensure they have been part of the testing and have signed off on the changes.

Pro-Tip: If your ETL upgrade was completed first, get it reviewed and signed off while the rest of the upgrade is still being tested.
Step 3: Prepare and Educate Your Users

Proactive communication is key to a smooth transition for everyone involved with the project.

Action: Prepare your user groups for the upcoming change.

For Technical Users (Developers): Produce a clear changelog or technical release notes. Inform them of any changes to development environments, deployment strategies, or key data structures that will impact their day-to-day work once the merge is complete.
For End-Users (Investigators, Analysts): Provide an updated user guide or release communication that documents any major changes to the User Interface (UI) or application functionality that they will see in the next production release.

Step 4: Plan the Merge Logistics

This step involves defining the precise technical strategy for merging your code.

Action: Define your merge plan, considering project structure.

For Split Repositories: Your merge plan must respect inter-repository dependencies. For example, a shared "model" repository must be merged, published, and consumed by the "ETL" and "Apps" repositories before they themselves can be merged. Document the exact order of operations.
For Multi-Use Case Platforms: Refer back to your upgrade plan. Ensure that the merge of your upgraded use case will not negatively impact the build or development process for other use cases running on the same platform.

Step 5: Coordinate and Execute the Merge to Main

This is the final, coordinated event to integrate the upgrade into your main codebase.

Action: Follow this checklist for a smooth merge:

Set a Merge Date: Align with all development teams on a specific date and time for the merge.
Communicate a Merge Freeze: Announce a "merge freeze" for the main branch leading up to the agreed-upon merge time. This prevents last-minute, unrelated changes from introducing conflicts or instability.
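The merge execution, and the git revert rollback this checklist calls for, can be rehearsed end-to-end in a throwaway repository before the real event. The sketch below is fully self-contained; all branch names, file names, and contents are hypothetical:

```shell
set -e
# Scratch repository in a temp directory (requires git 2.28+ for "init -b")
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email ci@example.com && git config user.name CI

echo "v2.5 config" > app.conf
git add app.conf && git commit -qm "baseline"

# Simulate the upgrade branch and the coordinated merge to main
git checkout -qb feature/platform-upgrade
echo "v2.6 config" > app.conf
git commit -qam "apply 2.6 migrations"
git checkout -q main
git merge -q --no-ff -m "Merge platform upgrade" feature/platform-upgrade

# Rollback drill: revert the merge commit itself (-m 1 keeps main's parent)
git revert -m 1 --no-edit HEAD >/dev/null
cat app.conf    # back to the pre-merge content
```

The key detail is `-m 1` (the mainline parent): plain `git revert` refuses to revert a merge commit without it.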
Prepare a Rollback Strategy: Have a documented plan in place for how you would revert the merge commit (e.g., using git revert) in the unlikely event of a critical, unexpected issue post-merge.
Execute the Merge: With all preparations in place, merge your feature/platform-upgrade branch into main.
Post-Merge Communication: Announce to all technical teams that the merge is complete. All new feature development must now branch from the updated main branch.

With the merge complete, your upgrade is now part of the main line of development. It will be deployed to production as part of your project's next standard release, following your organization's established release playbook.

Testing Your Upgrade: Practical Considerations
You have successfully migrated your code to the target version, and you have a stable, building repository. Now you will execute the Test Plan you created during the planning phase. This guide provides a practical, step-by-step methodology for development testing. The core principle is to start with fast, low-cost tests and progressively move to slower, more expensive, and more realistic validation. Do not jump straight to full data runs.

Phase 1: The Foundation - Local and Automated Testing

This is the fastest and most important feedback loop. The goal is to catch as many issues as possible on your local machine before ever deploying to a shared environment.

Validate Automated Tests: Your first step is to ensure your existing automated test suite is working. Run your entire suite of unit and integration tests locally and fix any failures. These tests are your first line of defense and are crucial for validating that core logic within your ETL and Scoring processes is still sound.

Local Runtime Testing: Before deploying, try to run components locally. For example, starting the Quantexa application services on your local machine can quickly reveal runtime dependency conflicts or configuration errors that were not caught at compile time. Fixing these locally is much faster than in a shared environment.

Phase 2: Controlled Environment Testing

Once your local tests are passing, it's time to move to a shared development or test environment. The goal here is to perform an initial end-to-end run in a controlled manner.

Establish a Stable Baseline: To test for regression, you must have a stable point of comparison.

Control Your Code: Ensure the main branch (your baseline) and your upgrade branch are based on the same starting commit. A common practice is to create a temporary release-candidate branch from main to use as a stable, unchanging baseline for comparison.
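One way to set up that frozen baseline is sketched below; the branch names are hypothetical and should be replaced with your project's own:

```shell
# Freeze a baseline branch at the commit the upgrade branch started from:
git checkout main
git checkout -b release-candidate            # stable, unchanging baseline

# Sanity check: print the common ancestor shared with the upgrade branch --
# it should be the same starting commit as the baseline:
git merge-base release-candidate feature/platform-upgrade
```

Keeping release-candidate unmerged and unpushed-to guarantees the baseline run and the upgrade run differ only in the upgrade changes themselves.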
Control Your Data: Use a static, well-understood dataset for this testing phase (e.g., a synthetic set or a small, sanitized sample of real data). The input data for your baseline run and your upgrade run must be identical.

Produce the Baseline Run: Execute your full end-to-end batch process on the main (or release-candidate) branch using your controlled dataset. Document the results meticulously, including:
HDFS/S3 output locations.
Elasticsearch index names.
Batch job runtimes and resource profiles.

Produce the Upgrade Run: Execute the same end-to-end batch process on your upgrade branch, using the exact same input data.

Compare and Analyze: Compare the outputs from the upgrade run against your baseline. The Statistical Profile Testing Framework (SPTF) is the recommended tool for efficiently comparing batch outputs. Expect some changes: bug fixes or functional improvements in the new Quantexa version can cause expected deviations. Your job is to validate that all changes are explainable and can be traced back to a specific migration or product change. Any unexplained changes are potential regressions that must be investigated.

Phase 3: Full Environment Testing (UAT / Staging)

Only after you are confident that most issues have been resolved should you move to a production-like environment with real, full-volume data.

Full Regression & SIT: Execute your full test plan in this environment. This includes validating integration with external schedulers, security configurations, and any other real-world integrations. This is your best opportunity to catch elusive, data-specific edge cases.

Performance Validation: Run your batch processes against full production data volumes. Compare the runtimes against your pre-upgrade performance benchmarks to ensure there are no significant performance regressions.

User Acceptance Testing (UAT): This is a vital part of the release cycle.
Block out sufficient time for business users to validate the solution and sign off on the upgrade, and factor in time to investigate and resolve any issues found during UAT and to re-test.

By following this phased approach, you systematically build confidence in your upgrade, ensuring that by the time you reach the most expensive testing phase, you have already eliminated the vast majority of issues in a more efficient manner.

How to Perform an Incremental Upgrade
You have completed your planning and preparation, and you are now ready to begin the hands-on execution of the upgrade. This guide provides the step-by-step technical process for performing a single "hop" of an incremental upgrade (e.g., migrating from v2.5 to v2.6). The fundamental principle of an incremental upgrade is to migrate your project one major version at a time, ensuring the codebase is stable and validated at each stage before proceeding to the next.

The Key Tool: The Repository Tool

The Quantexa Repository Tool will be the primary engine for your upgrade. It uses a file generator called PLOPjs and code-modification tools like Scalafix to automate a significant portion of the migration effort. While powerful, this tool does not cover every scenario, and manual changes may therefore be required.

The Workflow for a Single Upgrade Hop

For each hop in your upgrade roadmap (e.g., from v2.5 to v2.6), you will follow this four-phase process.

Phase 1: Automated Migrations

The first step is always to let the automated tooling do the heavy lifting. This is a highly scripted process. For detailed instructions on the commands, refer to the documentation on Running the repository tool.

Configure the Tool: Locate the migration-config.json for the version hop you are performing and update the projectPath to point to your repository.
Run Scalafix Migrations: Execute the Scalafix migrations first.
Run Plop Migrations: Execute the relevant Plop migrations. It is best practice to run these one by one to create a granular commit history.

Pro-Tip: If your project uses split repositories (e.g., separate ETL and Apps repos), only run the migrations relevant to the repository you are currently working on.

Phase 2: Manual Migrations

With the automated changes applied, you will now execute the manual tasks from the backlog you created in the preparation phase. Your JIRA board is your guide for this phase.
- Execute Manual Migration Tickets: Work through the JIRA tickets you created for the mandatory manual migrations. Each ticket should contain the context and a link to the specific documentation needed for that single task.
- Address Customization Tickets: Work through the tickets related to your project's customizations. These tasks involve reviewing how the upgrade has impacted your custom code and applying the necessary fixes to make it compatible.
- Execute Optional Migration Tickets: If you decided to include any optional migrations in your scope, execute those tickets now.
Pro-Tip: Make small, specific commits for each distinct manual change (e.g., "Fix custom scoring function for v2.6 API change"). This creates a clean, traceable history. Refer to project-example to see how the same migrations were applied in Quantexa's reference project.

Phase 3: Compile and Validate
With the automated and manual changes applied, this phase is focused on compiling the code and validating its integrity by running the automated test suite.
- Compile the Code: The first action is to run a full build of the repository. Resolve any compilation errors that arise from the applied migrations; these are typically caused by API changes or updated method signatures that impact custom code.
- Validate with Unit Tests: Once the repository compiles successfully, run your full suite of automated unit tests and address any tests that fail as a result of the upgrade changes.
A successful, clean build with all unit tests passing marks the end of the development work for this hop.

Phase 4: Intermediate Validation (Optional but Recommended)
Before moving to the next hop, it is highly recommended to perform some level of intermediate validation to catch issues early:
- Run your automated integration tests.
- Perform a small, local ETL run to ensure the core process works.
- If practical, deploy the applications locally to check for runtime errors.
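The Pro-Tip about making one small, specific commit per manual migration is worth building into the team's muscle memory. The sketch below illustrates the pattern in a throwaway git repository; the file names and commit messages are illustrative, not taken from any Quantexa guide:

```shell
set -e
# Illustrative only: a throwaway repo demonstrating one focused commit
# per manual migration task, so the history reads as a migration log.
workdir=$(mktemp -d)
cd "$workdir"
git init -q upgrade-demo
cd upgrade-demo
git config user.email "dev@example.com"   # placeholder identity for the demo
git config user.name "Upgrade Dev"

# First manual migration: one focused change, one focused commit.
echo "newScoringApi()" > CustomScore.scala
git add CustomScore.scala
git commit -q -m "Fix custom scoring function for v2.6 API change"

# Second manual migration: a separate, equally specific commit.
echo "handleNewResolverOutput()" > ResolverTestUtils.scala
git add ResolverTestUtils.scala
git commit -q -m "Update test utilities for new Batch Resolver output files"

# The log now traces exactly which migration caused which change.
git log --oneline
```

A history shaped like this makes it far easier to bisect a regression back to the single migration task that introduced it.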
Repeat and Finalize
Once you have completed this four-phase process for one hop, you repeat the entire workflow for the next hop in your upgrade roadmap (e.g., v2.6 -> v2.7). After the final hop is complete and you have a stable, building repository on your target version, the hop-by-hop development phase is over. You are now ready to proceed with the full Development Testing as outlined in your test plan.

Turn Your Upgrade Plan into Action: Creating JIRA Tickets
This is the final step of the preparation phase. You have a comprehensive Final Upgrade Plan that details every task required for the upgrade. Now it's time to translate that plan into a structured, actionable backlog of JIRA tickets for the development team. A well-defined backlog is the bridge between planning and execution: it gives developers clarity on what needs to be done, allows for accurate progress tracking, and forms the basis of your sprint planning.

The Process: From Plan to Backlog
The core input for this process is the "In Scope Tasks" table from your Final Upgrade Plan. Each row in that table should become one or more tickets in your JIRA backlog.
Head Start: To accelerate this process, a set of template JIRA tickets for specific platform version upgrades can often be provided. Ask your Quantexa contact if a template is available for your upgrade path. You can then import and adjust these tickets to fit your project's specific scope.

Action: Create Epics for Major Stages
Start by creating a master Epic for the entire upgrade (e.g., "Platform Upgrade to v2.8"). Then, if you are performing a multi-hop upgrade, create a sub-Epic for each hop (e.g., "Upgrade 2.5 -> 2.6", "Upgrade 2.6 -> 2.7"). This provides a clear hierarchical structure.

Action: Create a Ticket for Each Task
Go through your "In Scope Tasks" table and create a JIRA ticket (e.g., a Story or a Task) for each line item. A good upgrade ticket is specific, actionable, and contains all the necessary context.

What a Good Upgrade Ticket Looks Like
Here are examples of how to structure tickets for different types of upgrade tasks.

Example 1: Ticket for an Automated Migration Step
Title: Run Repository Tool for v2.7 Migration
Description: As a developer, I need to run the Repository Tool to apply the automated migrations for the v2.6 -> v2.7 upgrade.
Acceptance Criteria:
- Repository Tool is executed successfully against the feature/platform-upgrade-v2.8 branch.
- All automated migrations from the v2.7 configuration are applied.
- The resulting code compiles successfully.
- Unit tests pass.
- Any failures or manual interventions required are documented in the Issue Tracker.

Example 2: Ticket for a Manual Migration Step
Title: [Manual] Upgrade Document Service Response Handling
Description: As per the v2.5 migration guide, the response format for the Document Service has changed. We need to manually update our custom UI viewers to handle the new format, which no longer automatically converts Long values to date strings.
Reference: v2.5 Migration Guide#DocService
Acceptance Criteria:
- All custom document viewers are updated to correctly parse and display dates from the new Document Service response.
- The UI correctly displays document information without errors.

Example 3: Ticket for a Tech Debt Task
Title: [Tech Debt] Decommission 'Legacy Data Feed'
Description: As part of the upgrade, we will remove the unused 'Legacy Data Feed' configuration and all associated code from the ETL process.
Reference: Project Backlog Item #1234
Acceptance Criteria:
- All configuration files related to the legacy data feed are deleted.
- All code that references the legacy data feed is removed.
- The main ETL process runs successfully without the legacy code.

Planning for Parallel Work
While many upgrade tasks are sequential, look for opportunities to parallelize work where possible. This is highly dependent on your project's specific architecture. For example, once the core ETL upgrade is complete and tested, the work to upgrade a separate "Scoring" repository and a separate "Apps" repository can often be done in parallel by different developers.

With your backlog created and prioritized, the preparation phase is complete. Your team now has a clear set of tasks to begin the hands-on execution of the upgrade.

Understanding Support During Upgrades: Where to Turn When Issues Arise
During an upgrade, you may encounter unexpected issues, from dependency conflicts to confusing migration errors. Knowing where to turn for help, and what information to provide, is key to resolving these issues quickly and keeping your project on track. This guide outlines the recommended support process, from self-service troubleshooting to engaging with the Quantexa Community and support options.

The Support Process: A Tiered Approach
Follow this tiered process to get your issues resolved efficiently.

Tier 1: Self-Service & Troubleshooting
Before reaching out for help, there are several steps you can take to diagnose the issue yourself. This is often the fastest way to a resolution.
- Review Your Trackers: Check your Customization Tracker and Issue Tracker. Is this a known issue or related to a high-risk customization you've already identified? Has another team member already logged and solved a similar problem?
- Consult project-example: Quantexa's reference project is an invaluable debugging tool.
- Verify Migrations: project-example contains a commit history showing exactly how each automated and manual migration was applied for a given version hop. Compare the problematic area of your code against the equivalent in project-example to see if you missed a step or implemented a change differently.
- Identify Best Practice Deviations: A high percentage of upgrade issues stem from project customizations that deviate from Quantexa best practice. If you are having trouble with a component, comparing its structure and patterns to the project-example implementation can often highlight the difference that is causing the problem.
- Check for Common Problems: Many upgrade issues fall into a few common categories. Before digging deeper, review this checklist:
  - Is it a dependency clash? Upgrades often bring new versions of third-party libraries. If you see errors like NoSuchMethodError or ClassNotFoundException, it is often a sign that a library version is incorrect.
    Ensure your project is sourcing library versions from the official Quantexa Bill of Materials (BOM) wherever possible.
  - Is it a missing dependency? If your build is failing because it cannot download a specific artifact, double-check that you have uploaded all the required dependency bundles (including for components like Data Packs) to your artifact repository.
  - Is it a customization-related breaking change? If an error is occurring in or around a piece of custom code, it is highly likely that an underlying product API or class it was relying on has changed. This is the most common source of upgrade issues.

Tier 2: Community & Formal Support
If you've ruled out the common problems and are still stuck, it's time to engage with the wider Community and Quantexa's formal support options.
Action: Raise a Support Request on the Quantexa Community Site. This is the primary mechanism for getting help. To ensure your request is handled efficiently, follow these best practices:
- Read the Guides: Familiarize yourself with How to Maximize Value from Community Support and How to Write a Good Support Request.
- Use the 'Upgrade' Tag: When creating your post, add the Upgrade tag. This helps route your request to the correct subject matter experts (SMEs) more quickly.
- Provide Rich Context: Your support request should be as detailed as possible. Include:
  - The version you are upgrading from and to.
  - The exact error message and full stack trace.
  - A description of what you were doing when the error occurred.
  - Relevant code snippets (if possible).
  - Details from your troubleshooting so far.

By following this structured approach, you can ensure that when you do need help, you get a fast and effective resolution, minimizing delays to your upgrade timeline.

Upgrading Without Stopping: Development Strategies and Environment Planning
Before the hands-on upgrade work begins, it is critical to establish a strategy that allows the upgrade to proceed in parallel with normal, business-as-usual (BAU) development. Freezing your project's main branch for the duration of a potentially multi-week upgrade is not a viable option. This guide outlines the best practices for setting up your code branching and environment usage plans, keeping the upgrade work isolated but not divergent so that both your upgrade team and your BAU team can work productively and without conflict.

The Code Strategy: Isolate with a Long-Lived Upgrade Branch
The core of a parallel development strategy is to isolate the upgrade work in a dedicated, long-lived branch. This creates a separate workspace for the upgrade team while allowing the BAU team to continue merging new features and bug fixes into the main branch as usual.
Action: Create your upgrade branch. Create a new branch from the latest version of your main branch. A common naming convention is feature/platform-upgrade-v2.8.
Action: Establish a process for staying in sync. An upgrade branch that is never updated from main will quickly become impossible to merge. It is essential to have a regular process for incorporating ongoing BAU development into your upgrade branch. This is covered in detail in the documentation's guide to Ongoing development during upgrades. The key activities for the upgrade team are:
- Rebase Regularly: On a consistent schedule (e.g., weekly), rebase the upgrade branch against main. This pulls in the latest BAU changes incrementally, making the final merge much simpler.
- Maintain a Changelog: As you work, keep a simple changelog. Note down any changes or decisions that will impact other developers when the final upgrade is released (e.g., "The custom-function-x has been replaced with the new native-function-y").
- Update Your Trackers: Keep your issue and effort trackers up to date.
This provides real-time visibility on progress and ensures knowledge is not lost.

The Environment Strategy: Plan for Shared Resources
Just as your code needs to be isolated, your execution environments need to be managed to prevent conflict. Upgrade tasks, particularly full batch runs, can be resource-intensive and can interfere with other development or testing activities if not properly coordinated.
Action: Define your upgrade environment. Designate a specific development or test environment for the upgrade work. This ensures that the upgrade team's activities, such as running a newly migrated ETL process for the first time, do not impact BAU sprint testing in another environment.
Action: Establish communication protocols for shared resources. Your project likely shares key infrastructure components, such as a Spark or Elasticsearch cluster, across multiple teams or use cases. It is vital to establish clear rules of engagement to avoid contention. Your project should have a defined process for:
- Announcing when resource-intensive batch jobs will be run (e.g., via a dedicated Teams/Slack channel).
- Distinguishing between short, small jobs and long-running overnight jobs.
- Resolving prioritization conflicts if two teams need to run large jobs at the same time.
For larger programmes, a brief daily stand-up focused on cluster usage can be highly effective for aligning priorities across teams. By clearly communicating intentions, you can prevent most environment-related conflicts before they happen.
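The isolate-then-rebase code strategy described above can be sketched end-to-end with plain git. This is a minimal illustration in a throwaway repository: the branch name follows the convention suggested in this guide, while the file names and commit messages are invented for the demo:

```shell
set -e
# Illustrative only: a long-lived upgrade branch kept in sync with main.
workdir=$(mktemp -d)
cd "$workdir"
git init -q project
cd project
git config user.email "dev@example.com"   # placeholder identity for the demo
git config user.name "Upgrade Dev"
git checkout -q -b main
echo "v2.5" > platform-version.txt
git add . && git commit -q -m "Baseline on v2.5"

# The upgrade team branches off the latest main for the upgrade work.
git checkout -q -b feature/platform-upgrade-v2.8
echo "v2.8" > platform-version.txt
git commit -q -am "Apply automated migrations for v2.8"

# Meanwhile, BAU development keeps landing on main.
git checkout -q main
echo "bug fix" > bau-fix.txt
git add . && git commit -q -m "BAU: fix reported defect"

# Weekly sync: rebase the upgrade branch onto the latest main,
# pulling in BAU changes incrementally instead of in one final merge.
git checkout -q feature/platform-upgrade-v2.8
git rebase -q main

# The upgrade branch now carries the BAU fix plus the upgrade commits.
git log --oneline
```

Because the BAU commit touches a different file than the upgrade commit, this rebase applies cleanly; in practice each weekly rebase is where you pay down merge conflicts a little at a time rather than all at once at the end.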