Recent Discussions
Q Assist FAQs
We are extremely excited to announce that Q Assist is now available to select customers in Early Access. You can read more about Q Assist and how it works in our Q Assist Community Announcement post. Below you will find some frequently asked questions about Q Assist.

What is Q Assist?

Q Assist brings Generative AI capabilities to the Decision Intelligence Platform and the solutions we sell in market. It is a modular platform component that includes a conversational UI, orchestration capabilities, configuration settings, and scalable APIs for copilots and LLMs. It embeds itself into everyday tasks and workflows for a more productive workforce and is grounded in the connected and contextual data and functionality our Decision Intelligence Platform offers. Q Assist seamlessly connects a customer-licensed, customer-deployed LLM with the Quantexa Platform, orchestrating between Quantexa services such as Search, Explorer, Graph, and Scoring to answer users' business questions via a natural language interface.

Is Q Assist a chatbot?

While Q Assist does offer a chat-like interface and functionality, it is much more than a chatbot. Q Assist works alongside Quantexa users, embedding itself into everyday tasks and workflows for a more productive workforce, and is grounded in the full set of data and functionality our Decision Intelligence Platform offers.

Is Q Assist an LLM?

No, Q Assist is not a Large Language Model (LLM). Q Assist connects customer-deployed LLMs to the Quantexa Platform. We expect customers to deploy their own LLMs, separately from their Quantexa deployment.

Does Q Assist have RAG capabilities?

Yes. Q Assist leverages the Contextual Fabric and Decision Intelligence Platform components such as Entity Resolution, graphs, scoring, and transaction data as additional contextual information. We call this Contextual RAG.

What is the problem Q Assist is solving for customers?

Q Assist simplifies the integration, contextual grounding, and access to critical insights and data through seamless connectivity, contextual awareness, and conversational AI capabilities.

Benefits for users: Q Assist enhances productivity and streamlines work for users of Quantexa's Decision Intelligence Platform through a natural language interface to:
- Query the Quantexa Platform to investigate large and complex networks or to get information on a customer.
- Find and summarize risks, insights, and opportunities.
- Get summarized information about people, businesses, and transactions from connected data and the relationships identified.
- Generate reports such as escalation reports, prospecting reports, and SAR narratives.

Benefits for teams:
- (Efficiency) Go from research to action in minutes, not days. Streamline and augment analysis, research, and reporting tasks so your frontline teams spend time on higher-value strategic work.
- (Effectiveness) Converse with your data to uncover insights. Empower knowledge workers with the data and insights they need from disparate sources to make better decisions.
- (Trust) Drive clarity with context from prompt to response. Make trusted, traceable, and consistent decisions and reduce inaccuracies by grounding responses in contextual data.
- (Consistency) Level the playing field. Establish consistent, secure workflows that enforce best practice, compliance, and alignment with organizational standards.

What is the Orchestration Layer?

Q Assist's orchestration layer is the "conductor" that manages and coordinates all the moving parts within Q Assist. It ensures that user inputs are processed, contextual data is incorporated from the relevant platform components, and outputs are generated in a coherent, consistent, and explainable manner. The orchestration layer handles tasks like:
- Prompt Management: Ensuring that the correct context is applied to each request.
- Data Integration: Retrieving and integrating information from multiple sources to enrich the response.
- Agentic Architecture: Coordinating workflows, routing requests to the appropriate models and platform components, applying business rules and guardrails, and aggregating results.

In short, the orchestration layer serves as the control hub that ties together the complexities of Gen AI, delivering unified and context-aware outputs tailored to the user's needs.
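To make that flow concrete, here is a minimal, hypothetical sketch of an orchestration loop in Scala. None of these names (PlatformService, Orchestrator, buildPrompt) are Quantexa APIs; the sketch only illustrates the shape of prompt management, data integration, and request routing, assuming each platform component can contribute context to a grounded prompt.

```scala
// Hypothetical sketch only: PlatformService, Orchestrator, and buildPrompt
// are illustrative names, not Quantexa APIs.
trait PlatformService {
  // Each component (e.g. Search, Graph, Scoring) contributes context.
  def fetchContext(question: String): String
}

// `llm` stands in for a customer-deployed model behind any API.
class Orchestrator(llm: String => String, services: Seq[PlatformService]) {

  // Prompt management: apply the retrieved context to the request so the
  // answer is grounded in platform data ("Contextual RAG").
  private def buildPrompt(question: String, context: Seq[String]): String =
    s"""Answer using only the context below.
       |Context:
       |${context.mkString("\n")}
       |Question: $question""".stripMargin

  def answer(question: String): String = {
    // Data integration: gather context from each relevant component.
    val context = services.map(_.fetchContext(question))
    // Routing: send the grounded prompt to the customer-deployed LLM.
    llm(buildPrompt(question, context))
  }
}
```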
Does Q Assist require an LLM?

Yes. Customers are required to license and deploy an LLM themselves for Q Assist to work. This should not be a barrier, though, as most organizations we have spoken to are already evaluating or using LLMs in one way or another.

Does Q Assist work with all LLMs?

Q Assist has been built to be LLM agnostic, meaning it is compatible with an extensive list of the most popular foundational models available on the market, including:
- OpenAI
- Anthropic
- Mistral
- Gemini
- Llama
- DeepSeek
- And more…

How is Q Assist licensed?

Q Assist is available in Quantexa Platform version 2.7 and is licensable as an optional add-on to existing Financial Crime, Fraud, Customer Intelligence, KYC, and Risk solutions. In this release, Q Assist is limited to customers in EMEA and North America for English-language deployments only. For more information on licensing and pricing, please contact your Technical Account Partner or schedule a demo.

How do I see a demonstration of Q Assist?

We would be more than happy to take you through a demo of Q Assist. Please request a demo and we will be in touch.

Advanced Language Parsers Release
We are excited to announce the release of our new Advanced Language Parsers, designed to support the accurate parsing of non-Latin alphabets natively in the Quantexa Platform. This new capability will enable our customers to build contextual insights from across their data estate and expand Quantexa's use in a wider range of geographies. In this first release we support Japanese language parsing.

Parsers in the Quantexa Platform

Quantexa is very well known for its best-in-class Entity Resolution, and Parsers play a significant role in making our Entity Resolution as accurate as it is. Parsing is the process of extracting relevant information from ingested data and transforming it into a structured format that can be easily analyzed. For example, in a customer system you'd typically have a record such as "Mrs. Jane Doe". Parsing extracts it into manageable pieces: Title: Mrs.; GivenName: Jane; FamilyName: Doe. It would do the same for a record of a different format, such as "Jane Doe, Mrs.", as it identifies the different components.
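As a toy illustration of this idea (nothing like the real Parsers, which handle far more variation), structured-name extraction could look like the sketch below; the ParsedName fields mirror the Title/GivenName/FamilyName example above, and all names are invented for the example.

```scala
// Illustrative only: a toy name parser showing the idea of turning
// free-text records into structured components. Quantexa's real
// parsers are far more sophisticated than this sketch.
case class ParsedName(title: Option[String], givenName: String, familyName: String)

object NameParser {
  private val titles = Set("Mr.", "Mrs.", "Ms.", "Dr.")

  // Assumes the record contains at least a given name and a family name.
  def parse(raw: String): ParsedName = {
    // Split on commas/whitespace so "Jane Doe, Mrs." and "Mrs. Jane Doe"
    // yield the same tokens, then separate title tokens from name tokens.
    val tokens = raw.split("[,\\s]+").filter(_.nonEmpty).toList
    val (titleTokens, nameTokens) = tokens.partition(titles.contains)
    ParsedName(
      title = titleTokens.headOption,
      givenName = nameTokens.head,
      familyName = nameTokens.last
    )
  }
}

// NameParser.parse("Mrs. Jane Doe")  == ParsedName(Some("Mrs."), "Jane", "Doe")
// NameParser.parse("Jane Doe, Mrs.") == ParsedName(Some("Mrs."), "Jane", "Doe")
```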
The more complicated the data, the more processing is needed to prepare it for high-quality Entity Resolution, for example translation, transliteration, and normalization of the data. Quantexa's existing Standard Parsers are proven to parse data with high accuracy, while providing the ability to incorporate cultural differences and increase parsing accuracy for data from specific geographies by tailoring the Parsers. However, they work best with data in Latin character sets. For more information about Quantexa's Parsers, see our documentation.

In order to process data in alphabets other than Latin out of the box, we have created ML-powered Advanced Language Parsers, starting with the first release of the Advanced Japanese Parser (more Advanced Parsers are on the roadmap for later this year). This will significantly streamline Data Ingestion and result in far more accurate Entity Resolution for these non-Latin languages. You can now explore our roadmap and give feedback on our features and functionality in our Product Roadmap & Ideas Portal. Be a part of our product development!

What are we working with?

Japanese words can come in three different scripts:
- Kanji (traditional Chinese characters)
- Hiragana (a phonetic lettering system, used for words not covered by Kanji and for grammatical inflections)
- Katakana (a phonetic lettering system, used for transcription of foreign-language words into Japanese)

Apart from using different character sets, data in Japanese has many interesting characteristics. For example, Japanese addresses are typically formatted from big to small (country > city > street > house number), while Western addresses are usually formatted small to big (house number > street > city > country).

Transliteration vs Translation

Japanese words can be transliterated to create a Romanized version using Latin script (Romaji), or translated, so that the English equivalent of the word is used if one exists. For example:

Japanese: ソニーグループ株式会社
Romaji: Sonī Gurūpu Kabushiki-gaisha
English: Sony Group Corporation

What is included in Advanced Parsers?

The Advanced Japanese Parser includes Individual, Business, and Address parsers.

Individual parser:
- Based on a library provided by the CJK institute, which tokenizes and transliterates characters representing Japanese names.
- The library consists of code and a database to be distributed.
- The code makes calls to the database to retrieve the most likely transliterations of Japanese names based on combinations of input characters.

Business parser:
- Uses the existing business parser architecture with Japanese standardizations.
- Translates using a lookup from JMDict.
- Transliterates using two third-party tools.

Address parser:
- Uses an AI model, a "Mixed Field Parser", trained on Japanese data for parsing addresses.
- Transliterates only (no translation) using two third-party transliterators.
- Can produce enriched variants using publicly available address postcode information.

Also, a new configuration of the Email Parser was created to handle emails with Unicode characters (including Japanese).

What is needed to configure Japanese Parsers?

To create entities with Japanese data, you will need to take the following steps:
1. Add data sources that contain Japanese names, addresses, and businesses.
2. For the data sources with Japanese data, update the parse method to use the Advanced Japanese parsers. The Advanced Parser is applied if the input contains Japanese characters; if the input contains Latin characters only, the data is parsed using the standard (composite) parsers (see the sketch after this list).
3. Modify the entity files to use the new compound groups.
4. Add custom Japanese resolution templates and compounds to the resolver config.
5. Run ETL with the correct usage of the Advanced Parsers, including the CJK and MFP files/Spark config.
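The routing rule in step 2 can be sketched with standard Java Unicode script detection. This is a minimal illustration of the documented behaviour (advanced parsing for Japanese input, standard parsing otherwise); the parser identifiers returned here are placeholders, not real configuration values.

```scala
// A minimal sketch of the routing rule above, assuming the rule is:
// route to the Advanced Japanese parser when Japanese characters are
// present, otherwise use the standard (composite) parser. The returned
// names are placeholders, not actual Quantexa configuration values.
import java.lang.Character.UnicodeScript

object ParserRouter {
  // Japanese text mixes Kanji (Han), Hiragana, and Katakana scripts.
  private val japaneseScripts: Set[UnicodeScript] =
    Set(UnicodeScript.HAN, UnicodeScript.HIRAGANA, UnicodeScript.KATAKANA)

  def containsJapanese(input: String): Boolean =
    input.exists(c => japaneseScripts.contains(UnicodeScript.of(c)))

  def route(input: String): String =
    if (containsJapanese(input)) "advancedJapaneseParser"
    else "standardCompositeParser"
}

// ParserRouter.route("ソニーグループ株式会社")   == "advancedJapaneseParser"
// ParserRouter.route("Sony Group Corporation") == "standardCompositeParser"
```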
Important to know

For now, the Advanced Language Parsers are an experimental release in Parsers version 4.2.1. The Advanced Parsers include a few tools (including an ML model) that are targeted at increasing the accuracy of data processing and, subsequently, Entity Resolution. The trade-off for accuracy is performance: users can expect an increase in runtime compared to the Standard Parsers, and on average a 2x increase in Elastic index sizes. The good news is that these estimates apply only to the portion of the data that is in Japanese characters, and will not affect the figures for data processed by the Standard Parsers. For more information about performance and testing, check the Release Notes.

How can I get the Advanced Japanese Parser for my project?

Full information about the Advanced Japanese Parser is available on the Doc site. However, since this is an experimental release of the functionality, please reach out to @Anastasia Petrovskaia if you feel that the parser is applicable to your project. We are working on adding this capability to the Demo environment and are targeting March 2025 for this piece of work.

What is next?

Adoption and feedback from users will be a big part of maturing the Advanced Parsers, so there are no immediate plans to move the capability straight to EA/GA. You can provide feedback directly on the Advanced Language Parsers for Non-Latin Scripts using the Product Roadmap & Ideas Portal. The next Parser release will focus on improvements to the Standard Parsers. More Advanced Language Parsers for different languages/countries (e.g. Chinese, Arabic) are expected in H2 2025. For more information, reach out to Anastasia Petrovskaia.

FinCrime Detection Pack 0.4 Release

We are excited to announce that version 0.4 of our FinCrime Detection Pack is now available in Early Access. This release introduces new features that will increase flexibility, improve score coverage, and optimise score configuration to better capture desired behaviours, characteristics, and events. This builds on the functionality released in version 0.3 of the FinCrime Detection Pack.

Feature Highlights:
- You can now apply score logic on a targeted subset of transactions to help you uncover more specific underlying patterns that might otherwise be overlooked.
- You can now configure transaction score parameters based on the segment a score subject belongs to. This allows you to account for differences and ensure that you are capturing the behaviours and events you're interested in.
- You can now select from a range of appropriate Event Windows to ensure that a score is targeting the desired behaviour or event.
- Target new risks with additional Score Types.

For full details of the release, including compatible Quantexa Platform versions and minor enhancements, please see the Quantexa Documentation site:
- Release notes
- Migration guide

Target a subset of transactions with Transaction Filtering

What do we want to achieve?

Our score logic, when applied across all transactions, provides a holistic view of the behaviour, characteristic, or event we aim to capture. However, we may also need to apply the same score logic on a targeted subset of transactions to help us uncover more specific underlying patterns that might otherwise be overlooked, thereby strengthening the decisions we make. Previously, users needed to write custom code to target specific transactions with their scores, requiring Scala knowledge and incurring additional product maintenance costs. These limitations hindered our ability to efficiently and effectively analyse underlying behaviours, characteristics, and events without deployment-specific customisations.

Functionality Highlights

The Transaction Filter allows users to apply transaction-level filters to any transaction score within the FinCrime Detection Pack. Each score configuration where a set of transaction filters is applied is treated as a distinct score, which means it can be managed independently. Any attributes available at the transaction level can be used to set transaction filters using QSL expressions. A conceptual sketch follows the considerations below.

Example

The example below shows the total value of transactions each month. At first glance, the total value suggests no change in behaviour. However, when we examine the underlying transactions, we observe a significant change in the value of cash deposits being made. Users are now able to target this underlying behaviour using the Transaction Filter feature. The QSL expression in the video below creates an instance of the "Customer Rapid Monthly Increase In Value" score that targets increases in the value of cash deposits.

Benefits
- Increased flexibility and improved score coverage: users can apply transaction filters without writing custom Scala code or needing a new score to be prioritised and released.
- Easy to maintain and deploy new instances of existing scores.
- No Scala skills required.

Important Considerations
- Project teams will need to provide and maintain documentation for the transaction filters they've applied to the underlying score logic.
- Users require basic QSL knowledge to apply transaction filters.
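We can't reproduce QSL syntax here, but the concept behind the Transaction Filter can be sketched in plain Scala: the underlying score logic stays unchanged, and a filtered instance simply runs it over a declaratively selected subset. All type, field, and value names below are invented for illustration.

```scala
// Illustrative sketch, not QSL: Transaction, txnType, and direction are
// invented names standing in for transaction-level attributes.
case class Transaction(txnType: String, direction: String, amount: BigDecimal)

object MonthlyValueScore {
  // The underlying score logic, unchanged regardless of filtering.
  def monthlyTotal(txns: Seq[Transaction]): BigDecimal =
    txns.map(_.amount).sum

  // A distinct, independently managed instance of the same logic that
  // targets cash deposits only, mirroring the example above.
  def cashDepositMonthlyTotal(txns: Seq[Transaction]): BigDecimal =
    monthlyTotal(txns.filter(t => t.txnType == "CASH" && t.direction == "CREDIT"))
}
```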
Configure transaction score parameters based on the segment a score subject belongs to with Subject Level Segmentation

What is a segment?

A segment is a group of entities, such as individuals or businesses, who share common characteristics, such as interests, demographics, or behaviour.

What do we want to achieve?

Determining when and how transaction scores should be used in the decision-making process may differ depending on the segment a score subject belongs to. We need to account for these differences to improve efficiency and ensure we capture the behaviours and events we're interested in. Previously, there was no standard approach within the FinCrime Detection Pack that allowed users to configure transaction score parameters based on the segment a score subject belongs to. The existing methods needed to be made available prior to a release, required additional ongoing product maintenance, and at times necessitated sufficient Scala skills. These limitations hindered our ability to account for differences that are specific to the segment a score subject belongs to.

Functionality Highlights

Segmentation allows users to configure transaction score parameters based on the segment a score subject belongs to. Users can tune parameters (including severity) based on the available segments, and have the option to define all or a subset of segments from the available segmentation in their transaction data model. A minimal sketch of the idea follows the considerations below.

Example

The example below shows the average monthly change in the value debited and credited to customers across different segments. We observe that large businesses have a much higher average compared to other segments. Suppose we want to set a threshold to focus only on large increases in funds being deposited. If we don't consider segments when setting this threshold, we may overlook individuals or trigger false positives for large businesses. Users are now able to configure score parameters based on the segments above using the Segments feature. The QSL expression in the video below creates an instance of the "Customer Rapid Monthly Increase In Value" score that can be configured based on the segments above.

Benefits
- Optimised score targeting: users can configure parameters based on a score subject's segment without needing a new score or override method to be prioritised and released.
- Ability to account for differences that are specific to the segment a score subject belongs to.
- Easy to maintain and apply segmentation to existing scores.
- No Scala skills required.

Important Considerations
- Setting parameters based on segmentation is optional, not mandatory.
- Any attributes to be used for the segmentation of scores must be available in the transaction data source.
- This feature cannot be used to exclude segments from being scored; that functionality will be introduced in a future release.
- Time-based parameters such as the lookback period, observation period, and event window are configured once and cannot be defined per segment.
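As a rough sketch of what per-segment configuration achieves, the snippet below maps segments to score parameters with a fallback default. The segment names and thresholds are made up for the example, not shipped defaults.

```scala
// Illustrative sketch: per-segment score parameters. Segment names and
// threshold values are invented for the example.
case class ScoreParams(threshold: BigDecimal, severity: Int)

object SegmentedScore {
  private val defaults = ScoreParams(threshold = BigDecimal(10000), severity = 1)

  private val bySegment: Map[String, ScoreParams] = Map(
    "INDIVIDUAL"     -> ScoreParams(BigDecimal(5000), severity = 2),
    "SMALL_BUSINESS" -> ScoreParams(BigDecimal(50000), severity = 1),
    "LARGE_BUSINESS" -> ScoreParams(BigDecimal(500000), severity = 1)
  )

  // Resolve parameters for a score subject's segment, falling back to
  // a default when the segment is not explicitly configured.
  def paramsFor(segment: String): ScoreParams =
    bySegment.getOrElse(segment, defaults)

  def triggers(segment: String, monthlyIncrease: BigDecimal): Boolean =
    monthlyIncrease >= paramsFor(segment).threshold
}
```

With segment-aware thresholds like these, a large increase for an individual can trigger the score without also flagging routine volumes for large businesses, which is exactly the false-positive trade-off described in the example above.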
Target your detection with Configurable Event Windows

What do we want to achieve?

The Event Window is a parameter that specifies the time period over which a targeted behaviour or event is to be observed. Some scores may require different Event Window settings based on the use case the score is applied to and the behaviour or event the score aims to identify. Previously, the Event Window was fixed to monthly within the FinCrime Detection Pack; this fixed setting is not appropriate for all use cases and scores. We want to provide the option to select from a range of appropriate Event Windows to ensure that a score is targeting the desired behaviour or event.

Functionality

Depending on the selected score, a user can now choose between the following time periods to set the Event Window (illustrated in the sketch at the end of this section):
- 3 days
- Weekly
- Fortnightly
- Monthly

Example

"Customer With Rapid Movement Of Funds" is a score characterised by a large amount of funds moving through a customer's accounts over a short interval. Both the "3 days" and "Weekly" options could be used to identify this behaviour, depending on the use case. Users can now select the option that best applies to their use case, or even target both time periods across different score instances.

Benefits
- Optimised score targeting: users can select from a range of appropriate Event Windows to ensure that a score is targeting the desired behaviour or event.
- Easy to maintain.
- No Scala skills required.
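Here is a minimal sketch of the Event Window concept, assuming transactions carry a date and the observation period starts on a known date. The window lengths mirror the options listed above; monthly is approximated as 30 days purely for the sketch.

```scala
// Illustrative sketch of the Event Window parameter: the same
// aggregation applied over different window lengths. Field names are
// assumptions; monthly is approximated as 30 days for simplicity.
import java.time.LocalDate
import java.time.temporal.ChronoUnit

case class Txn(date: LocalDate, amount: BigDecimal)

object EventWindow {
  val ThreeDays   = 3
  val Weekly      = 7
  val Fortnightly = 14
  val Monthly     = 30

  // Total value moved in each consecutive window of `windowDays`,
  // counted from the start of the observation period.
  def totalsPerWindow(txns: Seq[Txn], start: LocalDate, windowDays: Int): Map[Long, BigDecimal] =
    txns
      .groupBy(t => ChronoUnit.DAYS.between(start, t.date) / windowDays)
      .map { case (window, ts) => window -> ts.map(_.amount).sum }
}
```

Targeting both the "3 days" and "Weekly" periods for a score such as "Customer With Rapid Movement Of Funds" then amounts to evaluating the same aggregation twice, once per window length.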
Target new risks with additional Score Types

We are introducing 9 additional score types to the FinCrime Detection Pack, which means we will support 35 score types in the 0.4 release. These score types can be combined to create 76 distinct scores. The new score types are:
- Entity Linked To Entity With Listed Internal Risk Rating
- Entity With Specific Internal Risk Score
- Entity Linked To Entity With Listed Industry
- Entity With Listed Industry
- Relationship Structuring
- Customer Structuring
- Customer Rapid Movement Of Funds
- Transaction After Period Of Dormancy
- Relationship With Transaction Score Link

Additional Features
- Integration with QSL Graph Scripting.
- Template expansions are provided for all of our supported scores.
- Multiple instances of the same score can now be grouped in configuration files for easy maintenance.

Detection Packs 0.3 Release

We are excited to announce the release of version 0.3 of Detection Packs. This is the third major release of Detection Packs and builds on the 0.2 version, which introduced our low-code interface. For full details of the release, including compatible Quantexa Platform versions and minor enhancements, please see the Quantexa Documentation site.

Expanded Score Coverage

This release focuses on the expansion of our score coverage and general maturing of the product, with no significant changes to the interface, enabling those projects already using 0.2 to upgrade to 0.3 easily.
- 2 new transaction score pipelines were added, each with 4 score types, such as "Transaction with Different Currencies" and "Transaction in Listed Jurisdiction".
- 5 new Entity Record score types have been added, such as "Highly Connected Entity" and "Entity With Listed Type".
- 5 new Entity Network score types have also been added, such as "Entity With Indirect Relation To Listed Jurisdiction" and "Entity Linked To Entity With Listed Status".

In total, the FinCrime Detection Pack now contains 26 pre-written, configurable, re-usable, and extensible Score types, which can be combined to produce a total of 56 Scores. For full documentation on these, please see our technical documentation. These new scores, in addition to those already in the FinCrime Detection Pack, can be extended further to meet project-specific needs by utilizing the customization options documented on the Quantexa Documentation Site.

The collection of supporting Reference Scores has continued to expand, even as several have been adopted into this Detection Packs release. As a reminder, Reference Scores are pre-written Scores created in conjunction with our users to provide additional Scores over and above the core Detection Pack for FinCrime. They also cover additional use cases outside of FinCrime, and the catalogue currently contains over 50 further scores. Recent updates to the Reference Scores include a new correspondent banking use case, and updates to transaction scores such as "Transaction With Mirrored Trading" and "Transaction in High Proportion of Low Value Security".

Simplified User Experience

In addition to the expanded scoring options, the Detection Packs user experience has been simplified by reducing the amount and complexity of configuration required for your project. In v0.2 of Detection Packs, projects which only wished to use a subset of supported scores were still required to set up all of their data mappings. From v0.3 this is simpler, with various configuration options no longer required if not utilised.

Coming soon to Detection Packs

We are currently targeting mid-2024 for the 0.4 release of Detection Packs, with lots of exciting new features. Here are some of the planned features our users can look forward to in this release and beyond:
- Adoption of many more Reference Scores into officially supported, configuration-driven Detection Pack Scores
- Simplified graph-scripting support
- Dynamic pipeline generation
- Additional use case support, such as an Entity-level detection model
- Improved out-of-the-box testing and tooling
- Multi-typology and multi-product Scorecard support
- Score versioning and seamless upgrade support