14. Change log/history
- Rudolf Cardinal <email@example.com>, 2015–.
- Francesca Spivack, 2018–.
- Bug fix for incremental update (previous version inserted rather than
updating when the source content had changed); search for
- Checks for missing/extra fields in destination.
- “No separator” allowed for ``crate_anon.anonymise.anonregex.get_date_regex_elements()``, allowing anonymisation of e.g.
- New default for ``crate_anon.anonymise.anonregex.get_date_regex_elements()``, allowing anonymisation of ISO8601-format dates (e.g.
- Similar option for
- Similar option for
- Options in config to control these.
- Fuzzy matching, via the ``string_max_regex_errors`` option in the config. The downside is the potential for greedy matching: for example, if you anonymise “Ronald MacDonald” with “Ronald” and “MacDonald”, you can end up with “XXX MacXXX”, as the regex greedy-matches “Donald” to “Ronald” with one typo, and therefore fails to process the whole of “MacDonald”. On the other hand, this protects against simple typos, which are probably more common.
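The greedy-match pitfall above can be reproduced directly with the third-party ``regex`` module’s fuzzy matching (a sketch only; the real scrubber builds more elaborate patterns):

```python
import regex  # third-party "regex" module, which supports fuzzy matching

text = "Ronald MacDonald"
# {e<2} permits at most one insertion/deletion/substitution per match.
# "Ronald" matches exactly; but "Donald" inside "MacDonald" also matches,
# with a single substitution (D for R), so it gets scrubbed too.
scrubbed = regex.sub(r"(?:Ronald){e<2}", "XXX", text)
print(scrubbed)  # "XXX MacXXX"
```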
- Audit database/table.
- Create an incremental update to the data dictionary (i.e. existing DD plus any new fields in the source, with safe draft entries).
- Data dictionary optimizations.
- Whole bunch of stuff to cope with a limited computer talking to SQL Server with some idiosyncrasies.
- Ability to vary audit/secret map tablenames.
- Made date element separators broader in anonymisation regex.
- Bugfix: the date regex couldn’t cope with years prior to 1900.
- ``crate_anon.anonymise.patient.gen_all_values_for_patient()`` was inefficient, in that it would process the same source table multiple times to retrieve different fields.
- Simplification of ``SCRUBMETHOD.CODE``, particularly for postcodes. (Not very different from ``SCRUBMETHOD.NUMERIC``, but a little different.)
- ``debug_row_limit`` now applies to patient-based tables (as a per-thread limit); it was previously implemented as a per-patient limit, which was silly.
- Indirection step in config for destination/admin databases.
- ``ddgen_allow_fulltext_indexing`` option, for old MySQL versions.
- Bugfix: if a source scrub-from value was a number with value '.', the regex went haywire… so regex builders now check for blanks.
- ``regex.ENHANCEMATCH`` flag tried unsuccessfully (segmentation fault, i.e. internal error in the ``regex`` module, likely because the generated regular expressions got too complicated for it).
- ``SCRUBMETHOD.WORDS`` [? typo in changelog!]
- ``debug_max_n_patients`` option, used with ``crate_anon.anonymise.anonymise.gen_patient_ids()``, to reduce the number of patients processed for “full rebuild” debugging.
v0.10, 2015-09-02 to 2015-09-13
- Opt-out mechanism.
- Default hasher changed to SHA256.
- Bugfix to datatypes in
- Split main source code for simplicity.
- Database interface renamed from mysqldb to mysql, to allow for PyMySQL support as well (backend details otherwise irrelevant to front-end application).
- Added TRID.
- Code cleanup.
- HMAC for RID generation, replacing simpler hashes, for improved security.
- New option:
- Removed option:
- New options:
- Transition from ``cardinal_pythonlib.rnc_db`` to SQLAlchemy for the anonymiser database interface.
- Environment variable changed from ``CRATE_WEB_LOCAL_SETTINGS`` and coded into
- Web front end now happy getting structure from SQL Server and PostgreSQL.
- Windows support. Windows XP not supported as Erlang (and thus RabbitMQ) won’t run on it from the distributed binaries. Windows 10 works fine.
- Semantic versioning.
- Fixes to work properly with SQL Server, including proper automatic conversion of ``TEXT`` fields. Note: this also needs SQLAlchemy 1.1 or higher, currently available only via (1) fetching the source via ``git clone https://github.com/zzzeek/sqlalchemy`` and changing into the ‘sqlalchemy’ directory this will create; (2) activating your CRATE virtual environment; (3) ``pip install .`` to install SQLAlchemy from your source copy. Further note: as of v0.18.2, this is done via PyPI again.
- Opt-out management (1) manually; (2) via disk file; (3) via database fields.
- ONS Postcode Database.
- RiO preprocessor.
- Third-party patient cross-referencing for anonymisation.
- The ‘required scrubber’ flag, as a safety measure.
- Recordwise view of results in web interface.
- Static type checking.
- Regular expression NLP tools for simple numerical results (CRP, ESR, WBC and differential, Na, MMSE).
- v0.18.1 (2016-11-04): new ``anonymise_numbers_at_numeric_boundaries_only`` option, to prevent e.g. ‘23’ being scrubbed from ‘1234’ unless you really want to.
- More built-in NLP tools by now (height, weight, BMI, BP, TSH). MedEx support.
v0.18.2 to v0.18.8, 2016-11-11 to 2016-11-13
- Too many version numbers here because git connection unavailable for remote development.
- Requirement upgraded to SQLAlchemy 1.1.3, now that SQLAlchemy 1.1 and higher are available from PyPI.
- Support for non-integer PKs for NLP, to allow us to operate with tables to which we have only read-only access. This is a bit tricky. To parallelize, it helps to be able to convert a non-integer to an integer for use with the modulo operator, %. In addition, we store PK values to speed up incremental updates. It becomes messy if we have to cope with lots and lots of types of PKs. Also, Python’s ``hash()`` function is inconsistent across invocations. This is not a cryptographic application, so we can use anything simple and fast. It looks like MurmurHash3 is suitable (hash DDoS attacks are not relevant here). However, the problem then is with collisions. We want to ask “has this PK been processed before?” Realistically, the only plausible types of PK are integers and strings; it would be crazy to use floating-point numbers or BLOBs or something. So let’s put a cap at ``MAX_STRING_PK_LENGTH``; store a 64-bit integer hash for speed; use the hash to say quickly “no, not processed”; and check the original PK if the hash says “processed”. If the PK field is an integer, we can just use the integer field for the PK itself. Note that the ``delete_where_no_source`` function may be imperfect now under hash collisions (and it may be imperfect in other ways too).
- This system not implemented for anonymisation; it just gets too confusing (PIDs, MPIDs, uniqueness of PID for TRID generation, etc.).
- However, mmh3 requires a Visual C++ 10.0 compiler for Windows. An alternative would be to require pymmh3 but load mmh3 if available; however, pymmh3 isn’t on PyPI. Another option is xxHash, but that also requires VC++ under Windows; pyhashxx installs, but the interface isn’t fantastic. Others include FNV and siphash. The xxHash page compares quality and speed: xxHash beats FNV for both (and MurmurHash for speed); siphash isn’t listed. Installation of siphash is fine. Other comparisons at http://stackoverflow.com/questions/5400275/fast-large-width-non-cryptographic-string-hashing-in-python. Let’s use xxhash (needs VC++) with pyhashxx as a fallback… except that pyhashxx only supports 32-bit hashing. The pyhash module doesn’t install under Windows Server 2003, and nor does xxh, while lz4tools needs VC++. OK. Upshot: use mmh3, but fall back to some baked-in Python implementations (from StackOverflow and pymmh3, with some bugfixes) if mmh3 is not available.
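The “fast 64-bit hash to say no quickly, original PK to confirm” scheme can be sketched as follows. This uses a pure-Python FNV-1a 64-bit hash merely as a stand-in for MurmurHash3, and the function/store names are illustrative, not CRATE’s:

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def fnv1a_64(s: str) -> int:
    """Simple, fast, non-cryptographic 64-bit hash; stable across
    invocations (unlike Python's built-in hash())."""
    h = 0xCBF29CE484222325  # FNV-1a 64-bit offset basis
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 0x100000001B3) & MASK64  # FNV-1a 64-bit prime
    return h

# Illustrative progress store: hash -> set of original PKs with that hash.
processed = {}

def mark_processed(pk: str) -> None:
    processed.setdefault(fnv1a_64(pk), set()).add(pk)

def was_processed(pk: str) -> bool:
    # A hash miss means "definitely not processed"; on a hash hit, check
    # the original PK to guard against collisions.
    originals = processed.get(fnv1a_64(pk))
    return originals is not None and pk in originals

# The integer hash also allows parallelization of string PKs via modulo:
def assigned_worker(pk: str, nprocesses: int) -> int:
    return fnv1a_64(pk) % nprocesses
```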
- ``delete_where_no_source`` then failed as expected with large databases, so it was reworked to be OK regardless of size (using temporary tables).
- Python 3.5 can handle circular imports (for type hints) that Python 3.4 can’t, so some delayed and version-conditional imports to sort that out in the NLP code.
- Provide source/destination record counts from NLP manager, and better progress indicator for anonymiser.
- Optional NLP record limit for debugging.
- Speed increases by not requesting unnecessary
- Commit-every options for NLP (every n bytes and/or every n rows).
- Regex NLP for ACE, mini-ACE, MOCA.
- Timing framework for NLP (for when it’s dreadfully slow and you think the problem might be the source database).
- Significant NLP performance enhancement by altering progress DB lookup methods.
- Regex NLP: option in ``crate_anon.nlp_manager.regex_parser.SimpleNumericalResultParser`` to take absolute values, e.g. to deal with text like ``Na-142, K-4.1, CRP-97``, which uses ``-`` simply as punctuation, rather than as a minus sign. Failing to account for these would distort results. No attempt is made to specify maximum or minimum values, which can easily be excluded as required from the resulting data set. One could of course use the SQL ``ABS()`` function to deal with negative values post hoc, but some things have no physical meaning when negative, such as a white cell count or a CRP value, so it’s preferable to fix these at source to reduce the chance of user error through not noticing negative values. The ``take_absolute`` option is applied to: CRP, sodium, TSH, BMI, MMSE, ACE, mini-ACE, MOCA, ESR, and white cell/differential counts. (NLP processors for height and BP already enforced positive values. Weight must be able to handle negatives, as in “weight change –0.4 kg”.)
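A minimal sketch of the idea, with a hypothetical parser (not CRATE’s actual ``SimpleNumericalResultParser``):

```python
import re
from typing import Optional

# Hypothetical sodium parser: a naive signed-number capture swallows the
# hyphen in "Na-142" as a minus sign, yielding -142.
SODIUM = re.compile(r"\bNa\s*(-?\d+(?:\.\d+)?)")

def parse_sodium(text: str, take_absolute: bool = True) -> Optional[float]:
    m = SODIUM.search(text)
    if not m:
        return None
    value = float(m.group(1))
    # take_absolute treats the leading "-" as punctuation, not a minus sign
    return abs(value) if take_absolute else value

parse_sodium("Na-142, K-4.1, CRP-97")        # 142.0
parse_sodium("Na-142", take_absolute=False)  # -142.0
```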
- Similarly, a hyphen followed by whitespace is treated as ignorable in regex NLP (e.g. in ``weight - 48 kg``); though spaces are meaningful for mathematical operations (“a – b = c”), it is syntactically wrong to use ``- 4`` as a unary minus sign to indicate a negative number (–4), and much more likely that this context means a dash.
- En and em dashes, and a double hyphen used as a dash (``--``), are treated as ignorable in regex NLP.
- At present, Unicode minus signs (−) are not handled. For reference:

  name           character  code                      handling
  hyphen-minus   -          Unicode 002D (ASCII 45)   minus sign if context correct
  formal hyphen  ‐          Unicode 2010              not handled at present
  minus sign     −          Unicode 2212              not handled at present
  en dash        –          Unicode 2013              treated as ignorable
  em dash        —          Unicode 2014              treated as ignorable

  (It is possible that we may need to treat some of these as a minus sign in some contexts later, but this is not implemented yet.)
- Improved regex self-testing, including a new test framework in
- Full support for SQL Server as the backend.
- Hot-swapping databases (compare MySQL): you can rename databases, so this seems OK.
- Full-text indexing: optional in SQL Server 2008, 2012, 2014 and 2016; basic SELECT syntax is ``WHERE CONTAINS(fieldname, "word")``, and index creation is via ``CREATE FULLTEXT INDEX ON table_name (column_name) KEY INDEX index_name ...``. Added to
- Support for SQL query building, with user-configurable selector mechanism.
  See the Transact-SQL syntax reference. We use the Django setting ``settings.RESEARCH_DB_DIALECT`` to govern this.
- Tweaks/bugfixes for RiO preprocessor, and for anonymisation to SQL Server databases.
- Local help HTML offered via web front end.
- More fixes for SQL Server, including full-text indexing.
- Completed changes to CPFT consent materials to reflect ethics revision (Major Amendment 2, 12/EE/0407).
- Final update/PyPI push for CPFT consent materials.
- Extra debug options for consent-to-contact templates.
- Multi-column FULLTEXT indexes under SQL Server.
v0.18.15-v0.18.16, 2017-03-06 to 2017-03-13
- Full-text finder generates ``CONTAINS(column, 'word')`` properly for SQL Server.
- Bugfix to Patient Explorer (wasn’t offering WHERE options always).
- “Table browser” views in Patient Explorer.
- Bugfix to the Windows service. Problem: a Python process was occasionally being “left over” by the Windows service, i.e. not being killed properly. Process Explorer indicated it was the one launched as ``python launch_cherrypy_server.py``. The Windows event log had a message reading “Process 1/2 (Django/CherryPy) (PID=62516): Subprocess finished cleanly (return code 0).” The problem was probably that the ``cherrypy.engine.stop()`` call was only made upon a KeyboardInterrupt exception, and not on other exceptions. Solution: broadened to all exceptions.
- Removed erroneous debugging code from
- If you mis-configured the Java interface to a GATE application, it crashed quickly, which was helpful. If you mis-configured the Java interface to MedEx, it tried repeatedly. Now it crashes quickly.
v0.18.18 to v0.18.23, 2017-04-28
- Paper published on 2017-04-26 as Cardinal (2017), BMC Medical Informatics and Decision Making 17:50; PMID 28441940 (https://pubmed.ncbi.nlm.nih.gov/28441940/); https://doi.org/10.1186/s12911-017-0437-1.
- Support for configurable paths for finding on-disk documents (e.g. from a combination of a fixed root directory, a patient ID, and a filename).
v0.18.23 to v0.18.33, 2017-05-02
- Text field (``FN_VALUE_TEXT`` in code) given maximum length, rather than 50, for the regex parsers, as it was overflowing (e.g. when a lot of whitespace was present). See
- Supports more simple text file types (
- New option:
- Bugfix to CRATE GATE handler’s stdout-suppression switch.
- New option:
- PCMIS preprocessor.
- Support non-integer PIDs and MPIDs. Note that the hashing is based on a string representation, so if you have one database using an integer NHS number, and another using a string NHS number, the same number will hash to the same result if you use the same key.
- Hashing of additional fields, initially to support the PCMIS ``CaseNumber`` (as well as
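The string-representation point can be illustrated with a keyed hash; HMAC-SHA256 is used here as an example, and the function name is illustrative, not CRATE’s:

```python
import hashlib
import hmac
from typing import Union

def hash_pid(pid: Union[int, str], key: bytes) -> str:
    # Hash the *string representation*, so the integer 4321 and the
    # string "4321" give the same research ID under the same key.
    return hmac.new(key, str(pid).encode("utf-8"), hashlib.sha256).hexdigest()

hash_pid(4321, b"secretkey") == hash_pid("4321", b"secretkey")  # True
```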
v0.18.34 to v0.18.39, 2017-06-05
- For the SLAM BRC GATE pharmacotherapy app: add support for output columns whose SQL column name is different to the GATE tag (e.g. when ``dose-value`` must be changed to ``dose_value``); see the ``renames`` option. GATE output fields now preserve case. Another option (``null_literals``) allows GATE output of ``null`` to be changed to an SQL NULL. Also added a ``_set`` column to GATE output.
- Fixed Python type-checking bug in ``crate_anon.common.extendedconfigparser.ExtendedConfigParser.get_pyvalue_list()``; changed from
- Support for MySQL ``ENUM`` types. However, see http://komlenic.com/244/8-reasons-why-mysqls-enum-data-type-is-evil/ also!
To v0.18.46, 2017-07-28 to 2017-08-05
- Fix to ``coerce_to_date`` (for date types), renamed to
- NLP bug fixed relating to a missing
- Fixes to NLP, including accepting views (not just tables) as input. Note that under SQL Server, you should not have to specify ‘dbo’ anywhere in the config (but consider setting ``ALTER USER ... WITH DEFAULT SCHEMA`` as above).
- Manual and 2017 paper distributed with package.
- Shift some core stuff to cardinal_pythonlib to reduce code duplication with other projects.
- Clinician view: find text across a database, for an identified patient. See
- Rationale: Should privileged clinical queries be in any way integrated with CRATE? Advantages would include allowing the receiving user to run the query themselves without RDBM intervention and RDBM-to-recipient data transfer considerations, while ensuring the receiving user doesn’t have unrestricted access (e.g. via SQL Server Management Studio). Plus there may be a UI advantage.
- Clinician view: look up (M)RIDs from (M)PIDs. Intended purpose for this and
the preceding function: “My clinical front end won’t tell me if my patient’s
ever had mirtazapine. I want to ask the research database.” (As per CO’L
request 2017-05-04.) See
- Code to generate and test demonstration databases improved.
v0.18.49, 2018-01-07, 2018-03-21, 2018-03-27, published 2018-04-20
regex) for blacklisting words; this is much faster and allows large blacklists (e.g. a long list of all known forenames/surnames).
- Provides the ``crate_fetch_wordlists`` tool to fetch names and English words (and perform in-A-not-B functions, e.g. to generate a list of names that are not English words).
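The “in-A-not-B” idea is just a set difference; a toy sketch with made-up word lists:

```python
# Toy data standing in for fetched name/word lists
forenames = {"grace", "heather", "ronald", "zelda"}
english_words = {"grace", "heather", "table", "word"}

# Names that are NOT English words: safe to scrub aggressively without
# also removing ordinary vocabulary.
names_not_words = sorted(forenames - english_words)
print(names_not_words)  # ['ronald', 'zelda']
```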
- Extend CRATE’s GATE pipeline to include or exclude GATE sets, since some applications produce results just in one set, and some produce them twice (e.g. in the unnamed set, named ``""``, and in a specific named set).
- Medical eponym list.
v0.18.50 to v0.18.51, 2018-05-04 to 2018-06-29
- ``IllegalCharacterError`` was possible from ``crate_anon.crateweb.research.models.make_excel()``; it was raised by openpyxl. The problem may be that the Excel file format itself prohibits some Unicode characters; certainly openpyxl does. See ``gen_excel_row_elements()`` for the bugfix. Not all queries require this, but anything that allows unrestricted textual/binary content does.
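One common workaround for openpyxl’s ``IllegalCharacterError`` (a sketch, not necessarily the ``gen_excel_row_elements()`` fix) is to strip the control characters that the XLSX/XML format forbids before handing values to openpyxl:

```python
import re

# Control characters disallowed by XML 1.0 (and hence the XLSX format);
# these are a typical trigger for openpyxl's IllegalCharacterError.
# Tab, newline, and carriage return are legal and deliberately excluded.
ILLEGAL_XLSX_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def excel_safe(value):
    """Strip characters that openpyxl would refuse to write."""
    if isinstance(value, str):
        return ILLEGAL_XLSX_CHARS.sub("", value)
    return value

excel_safe("result\x07with bell")  # 'resultwith bell'
```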
- Change to CPFT-specific SQL in
- ``crate_anon.crateweb.extra.pdf.CratePdfPlan`` failed to specify ``wkhtmltopdf_filename``, so if ``wkhtmltopdf`` wasn’t found on the PATH (e.g. via a Celery task), PDFs were not generated properly.
Package version changes:
- amqp from 2.1.3 to 2.3.2; https://github.com/celery/py-amqp/blob/master/Changelog
- arrow from 0.10.0 to 0.12.1; https://pypi.org/project/arrow/
- beautifulsoup4 from 4.5.3 to 4.6.0; https://github.com/newvem/beautifulsoup/blob/master/CHANGELOG
- cardinal_pythonlib from 1.0.15 to 1.0.16
- celery from 4.0.1 to 4.2.0 (no longer constrained by amqp); http://docs.celeryproject.org/en/latest/history/
- chardet from 3.0.2 to 3.0.4
- cherrypy from 10.0.0 to 16.0.2; https://docs.cherrypy.org/en/latest/history.html
- colorlog from 2.10.0 to 3.1.4
- distro from 1.0.2 to 1.3.0
- django from 1.10.5 to 2.0.6; https://docs.djangoproject.com/en/2.0/releases/2.0/
- django-debug-toolbar from 1.6 to 1.9.1
- django-extensions from 1.7.6 to 2.0.7
- django-picklefield from 0.3.2 to 1.0.0
- django-sslserver from 0.19 to 0.20
- flashtext from 2.5 to 2.7
- flower from 0.9.1 to 0.9.2
- gunicorn from 19.6.0 to 19.8.1
- kombu from 4.0.1 to 4.1.0 (no longer constrained by amqp, but kombu 4.2.1 is broken: https://github.com/celery/kombu/issues/870)
- openpyxl from 2.4.2 to 2.5.4
- pendulum from 1.3.0 to 2.0.2; see https://pendulum.eustace.io/history/
- psutil from 5.0.1 to 5.4.6
- pyparsing from 2.1.10 to 2.2.0
- python-dateutil from 2.6.0 to 2.7.3
- regex from 2017.1.17 to 2018.6.21
- semver from 2.7.5 to 2.8.0
- sortedcontainers from 1.5.7 to 2.0.4
- SQLAlchemy from 1.1.5 to 1.2.8
- sqlparse from 0.2.2 to 0.2.4
- typing from 188.8.131.52 to 3.6.4
- Werkzeug from 0.11.15 to 0.14.1
- xlrd from 1.0.0 to 1.1.0
- (Windows) pypiwin32 from 219 to 223
- (Windows) servicemanager 1.3.0, as below
- (Windows) winerror
- If you are using SQL Server, you probably need to upgrade ``django-pyodbc-azure`` (from e.g. 184.108.40.206 to 220.127.116.11, with the command ``pip install django-pyodbc-azure==18.104.22.168``), or you may see errors from ``...\sql_server\pyodbc\base.py`` like “Django 2.0.6 is not supported.”
- You may also need to update the database connection parameters; e.g. the ``DSN`` key has become ``dsn``; see django-pyodbc-azure.
- New crate_celery_status command.
- Changed to using Celery ``--concurrency=1`` (formerly 4) from ``crate_anon.tools.launch_celery``, as this should prevent multiple Celery threads doing the same work twice if you call ``crate_django_manage resubmit_unprocessed_tasks`` more than once. There was a risk that this would break Flower or other Celery status monitoring (as it did with Celery v3.1.23), but that was a long time ago, and it works fine now.
- NLP fields now support a standard ``_srcdatetime`` field; this can be NULL, but it’s normally specified as a defining ``DATETIME`` field from the source database (since most NLP needs an associated date, and it’s far more convenient if this is in the destination database, along with the patient ID). It’s specified directly to the ``crate_anon.nlp_manager.input_field_config.InputFieldConfig``, rather than via the ``copyfields``, since we want a consistent date/time field name in the NLP output even if there is a lack of naming consistency in the source. Search for “new in v0.18.52”.
- Possibly a bug fixed within the NLP manager, in relation to recording of
hashed PKs from tables with non-integer PKs; see
v0.18.53, to 2018-10-24
- Added ``ClientOtherDetail.NINumber`` to the RiO automatic data dictionary generator as a sensitive (scrub-source) field; such fields were marked for code anonymisation but not flagged as scrub-source automatically.
- Removed the full stop from the end of the sentence in ``email_clinician.html`` beginning “If you’d like help, please telephone the Research Database Manager…”, since some users copied/pasted the full stop as part of the final e-mail address, which bounced. Clarity is more important than grammar in this case.
- NLP adds a CRATE version column,
- NLP adds a “when fetched from database” column,
- NLP supports “cmm” as an abbreviation for cubic mm (seen in CPFT, and as per https://medical-dictionary.thefreedictionary.com/cmm).
- ``cardinal_pythonlib==1.0.25``, with updates to ``document_to_text()`` parameter handling, then to
- Note that ``cardinal_pythonlib==1.0.25`` also fixes a bug related to SQLAlchemy that manifested as ``AttributeError: module 'sqlalchemy.sql.sqltypes' has no attribute '_DateAffinity'``.
- Note that
- NLPRP draft to 0.1.0.
- ``django==2.1.2``, given security vulnerabilities reported in Django versions [2.0, 2.0.8).
- ``mark_safe`` decorator added to all Django admin site parts with ``allow_tags = True`` set (for embedded URLs).
- Improved docstrings.
- Minor bugfixes in ``crate_anon.anonymise.anonymise`` for fetching values from files.
- ``_addition_only`` DDR flag only permitted on PK fields. (It was only attended to for them in any case!)
- Bugfix to ``crate_anon.crateweb.consent.views.validate_letter_request()``; these were returning rather than raising. Testing showed that something else was also blocking permission to access such things inappropriately, but fixed anyway!
- Renamed ``generate_random_nhs`` to emphasize what this does.
- Sitewide queries, editable by RDBM.
- Restrict anonymiser to specific patient IDs (for subset generation +/- custom pseudonyms).
- Deferred load of clinical team info. (The main research database structure is still loaded at the start; I think my intention was to fail as early as possible if it’s going to fail, and/or to ensure that “filling the cache” time is not experienced by the end user.)
- Fixed packaging bug in
- 2018-10-21: fixed bug ``OperationalError at /mgr_admin/consent/study/ (1054, "Unknown column 'consent_study.p_summary' in 'field list'")``; changed ``p_summary`` to a property.
- In ``crate_anon.anonymise.altermethod.AlterMethod._extract_text_func()``, pre-check that a file exists (to save time if it doesn’t).
- Bugfix to ``cardinal_pythonlib`` (now v1.0.33) in the autotranslation of SQL Server
- Changed caching for ``crate_anon.crateweb.research.research_db_info.SingleResearchDatabase`` to make command-line startup faster (at the expense of first-fetch speed).
- Bugfix to ``setup.py``; Java files were not being distributed properly.
- Performance optimization to query “column filtering” for “show only columns containing no NULL values”, and more generally optimized; should run queries only once per web session.
- Bugfix to ``crate_anon.crateweb.research.models.get_executed_researchdb_cursor()``, which was double-wrapping a database cursor incorrectly.
- New lithium NLP processor (still needs external validation).
- Bugfix: “cmm” was meant to be accepted as an abbreviation for “cubic mm” as per v0.18.53 above, but wasn’t. Rechecked all with ``crate_anon.nlp_manager.test_all_regex`` and added additional specific tests for this unit in ``crate_anon.nlp_manager.regex_units.test_unit_regexes()``. All passing.
- Clinician requests added so that a clinician can request that their patient is included in a study.
- Bugfix to ``crate_anon.preprocess.preprocess_rio.main()``: changed ‘progargs.rio’ to ‘rio’.
- Bugfix to ``clinician_initiated_contact_request()``: now checks that the patient’s consent mode is green or yellow before confirming the request.
- New look of website.
- Bugfix to clinician requests. Also now sends a more appropriate email in these cases.
- Updated version of Django in
- Flag on website to check if query has been run since last database update.
- Option of column in anonymiser output specifying when processed.
- Improved the text-extraction testing tool (``crate_anon.anonymise.test_extract_text``), including errorlevel/return codes to detect text presence.
- Bump to ``cardinal_pythonlib==1.0.47``. Note that this now raises an exception from ``cardinal_pythonlib.extract_text.document_to_text()`` if a filename is passed and the file doesn’t exist.
- NLP web server based on the NLPRP API.
- Bugfix to the website string finder: ‘text fields’ now includes ‘NVARCHAR(-1)’.
- NLP for glucose, cholesterol (LDL, HDL, total), triglycerides, HbA1c (still need external validation).
v0.18.65, 2019-03-04 to 2019-03-25
- NLP for potassium, urea, creatinine, haemoglobin, haematocrit (still need external validation).
- At some point before this: SQL helpers to find drug classes/types (e.g. “atypical antipsychotics”, “SSRIs”), as per JL’s idea of 2018-01-08.
- At some point before this: research query options to show a subset of columns.
- At some point before this: “Clinician asks for a study pack” – create a contact request that’s pre-authorized by a clinician (who might want to pass on the pack themselves or delegate the RDBM to do it).
- Standard site queries now handle the following problem: with regular data updates, there might be problems with queries returning different results if rerun a week later, so it might be worth returning a timestamp of some kind, like: ``MAX(DATE_CREATED) FROM RIO.DBO.Clinical_Documents`` + ``MAX(whenprocessedutc) FROM [RiONLP].[dbo].[crate_nlp_progress]`` + …
- Update to ``CrateGatePipeline.java`` to support an option to continue after GATE crashes.
v0.18.67, 2019-03-30 to 2019-03-31
- ``semantic_version``; consistent with CamCOPS and better (and not actually used hitherto by CRATE!)
- NLPRP constants and core API.
- Move to Python 3.6 (already the minimum in CPFT), allowing f-strings.
- f-strings. (Note: use Alt-Enter in PyCharm.)
- ``CrateGatePipeline.java`` supports continuation after a Java RuntimeException (“bug in GATE code”).
- Creatinine regex supports mg/dl units as well as micromolar.
- Bugfixes to
- PyPI distribution properly contains
- Bugfix to nlp incremental mode.
- Use of tokens in cloud NLP and option not to verify SSL.
- Bugfix to ``crate_anon.nlp_manager.cloud_parser.CloudRequest`` to convert string datetimes back to datetime objects. (MySQL automatically converts when writing to the database, but MSSQL doesn’t.)
- Only do NLP processing on records with alphanumeric characters.
- Do highlighting only once per query, then save the highlighted version in
an attribute of the
- Changed migrations to make them compatible with SQL Server.
- Long queries are now hidden on website in order to avoid long render time.
- ``crate_anon.nlp_manager.cloud_parser.CloudRequest`` now extracts content from GATE processors based on the start and end indexes.
- Option to truncate source data in nlp and to mark truncated records as processed or not.
- Upgrade to
- Bugfix to
- ``client_job_id`` obtained from args rather than the top level of the request.
- In ``crate_anon.nlp_manager.nlp_manager``, open the file to write after completing retrieval of requests, so that if there is a problem you don’t lose all your queue_ids.
- Records with no word character will not be sent.
- ``session.remove()`` has been added to
- ``crate_anon.nlp_manager.cloud_parser`` won’t crash if one request gives an error. This is so we don’t lose all data if just one request doesn’t work.
- In ``crate_anon.nlp_manager.nlp_manager.process_cloud_nlp()``, use file append instead of write, so that, if there’s a problem part-way through, we don’t lose all data.
- Downgraded to ``SQLAlchemy==1.2.8`` (which it was before) and ``django==2.1.9`` (which is higher than it was before), because the updates were causing clashes with
- Log error messages from server in
- Sending requests to the cloud servers is broken up into blocks so that the database can be written to periodically.
- New sessions for each request on the server-side.
- Microsoft-specific bugfix in cloud NLP.
- Commit every n records, where n is specified by the user, in retrieval of cloud requests.
- Used rate limiter.
- Bugfix to ``crate_anon.nlp_manager.cloud_parser.CloudRequest.get_nlp_values_internal()`` so that it doesn’t try to fish out results for a processor when there are errors.
- Retry after connection failure in
- ``HGB`` as a synonym for haemoglobin in
- ``OPTIONAL_POC`` element in several biochemistry/haematology parsers.
- ``crate_anon.nlp_manager.parse_haematology.WbcBase`` allows “per microlitre” as well as “per cubic mm”.
- Logging, rather than ``print()``, for regex testing.
- … then ``urllib3==1.24.2`` to avoid a high-severity security vulnerability (automatic GitHub warning; well done, it).
- NLPRP v0.2.0, with schema support.
- ``django==2.1.11`` (from 2.1.10); GitHub-prompted security fix.
- ``sqlalchemy==1.3.6`` (from 1.2.8); we needed to go to 1.3.0 (GitHub-prompted security fix) but had noted Windows problems with 1.3.0; it looks like the SQL Server regression was fixed in 1.3.1 (see https://docs.sqlalchemy.org/en/13/changelog/changelog_13.html), so going to 1.3.6.
- pandas), from 2.6.0 (was blocking readthedocs updates).
- ``cardinal_pythonlib==1.0.61`` (from 1.0.58); bugfix in log probability handling; fix relating to Django
- Bugfix to ``crate_anon.nlp_manager.parse_cognitive.MocaValidator``; it was looking at the mini-ACE instead!
- Abstract base classes in NLP parsers to assist with NLPRP work.
- Comments for NLP output columns (for built-in fields and those specified by destfields).
- Cloud NLP config modularized. Breaking change to existing cloud NLP configs.
- Some code simplification, including classes:
- Moved the “verify SSL” option from ``--noverify`` on the command line to the verify_ssl parameter.
- Parameterized the maximum request frequency via rate_limit_hz.
- Split the ``limit_before_write`` parameter into max_records_per_request and limit_before_commit.
- Renamed to ``nlp_webserver`` for clarity (since “web” might refer to client or server).
- Reorganized ``crate_anon.nlp_webserver.settings`` so “constants” has no import side-effects.
- Removed dependencies:
- ``typing`` – now using Python 3.6
- ``Werkzeug`` – no longer in use
- Pinned versions:
- Added requirements:
- Context-sensitive help on the CRATE web site, via
- NLPRP client sets
- Removed reference to the Django setting ``MANAGERS``, since we won’t enable Django’s ``BrokenLinkEmailsMiddleware``; see https://docs.djangoproject.com/en/dev/internals/deprecation/.
- Experimental: archive system.
- Removed ``cardinal_pythonlib.django.middleware.DisableClientSideCachingMiddleware``, since we may want to do some caching.
- Added a standard ``tense_text`` column to NLP classes.
- Python NLP:
- CRP value column case changed from
- Creatinine value column renamed from
- HbA1c value column renamed from
- Haematocrit value column case changed from
- Haemoglobin value column case changed from
- GATE parser now avoids stripping terminal tabs (now just newlines), removing error messages saying “Bad chunk, not of length 2”. See
- ``crate_anon.crateweb.research.models.PatientExplorer`` use is audited.
- NLP web server performance tweaks; database structure changes.
- Remove dependence on ``cardinal_pythonlib.rnc_db``, which is trivial but gives a warning.
- readthedocs.org problems fixed; see
- environment variable ``_SPHINX_AUTODOC_IN_PROGRESS`` (re errors from the docs build environment)
- ``readthedocs.yml`` (re resource usage)
- ``.ini`` files were being ignored (despite being fine on a local Sphinx build) – this was a
- environment variable
v0.18.88 to 0.18.91, 2019-10-06 to 2019-10-07
- We were seeing ``BrokenPipeError`` exceptions when very large chunks of text (e.g. 27 Mb) were being sent to GATE processors under Windows. This was due to a bug in the DOCX text extractor. So:
- ``BrokenPipeError`` exceptions are now trapped by the GATE and MedEx processors (leading to a log error, a restart of the processor, and a
- ``cardinal_pythonlib==1.0.67``, which has improvements to DOCX table extraction;
- right-strip all extracted text
- Bugfix: tools that were unrelated to the NLP web server were importing its settings (so requiring a dummy config file).
- Bugfix in the way that