14.5.11. crate_anon.nlp_manager.cloud_request
crate_anon/nlp_manager/cloud_request.py
Copyright (C) 2015, University of Cambridge, Department of Psychiatry. Created by Rudolf Cardinal (rnc1001@cam.ac.uk).
This file is part of CRATE.
CRATE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
CRATE is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with CRATE. If not, see <https://www.gnu.org/licenses/>.
This module is for sending JSON requests to the NLP Cloud server and receiving responses.
- class crate_anon.nlp_manager.cloud_request.CloudRequest(nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition, debug_post_request: bool = False, debug_post_response: bool = False)[source]
Class to send requests to the cloud processors and process the results.
- __init__(nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition, debug_post_request: bool = False, debug_post_response: bool = False) None [source]
- Parameters
nlpdef –
crate_anon.nlp_manager.nlp_definition.NlpDefinition
- classmethod set_rate_limit(rate_limit_hz: int) None [source]
Creates new methods which are rate limited. Only use this once per run.
Note that this is a classmethod and must be so; if it were instance-based, you could create multiple requests and each would individually be rate-limited, but not collectively.
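The collective-versus-individual distinction can be illustrated with a toy sketch (hypothetical code, not the CRATE implementation): because the minimum interval and last-call timestamp live on the class, every instance shares one limit.

```python
import time


class Requester:
    """Toy illustration: rate-limit at CLASS level, so all instances
    share the limit collectively (hypothetical; not CRATE's code)."""

    _min_interval = 0.0  # seconds between calls, shared across instances
    _last_call = 0.0

    @classmethod
    def set_rate_limit(cls, rate_limit_hz: int) -> None:
        # A rate of N Hz means at least 1/N seconds between requests.
        cls._min_interval = 1.0 / rate_limit_hz if rate_limit_hz > 0 else 0.0

    def send(self) -> float:
        # Sleep if the previous call (by ANY instance) was too recent.
        wait = Requester._last_call + Requester._min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        Requester._last_call = time.monotonic()
        return Requester._last_call


Requester.set_rate_limit(10)  # at most 10 requests/second, collectively
a, b = Requester(), Requester()
t1 = a.send()
t2 = b.send()  # delayed so the two calls are at least 0.1 s apart
```

If `_min_interval` were an instance attribute instead, two `Requester` objects could each fire at 10 Hz, giving 20 Hz against the server.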
- class crate_anon.nlp_manager.cloud_request.CloudRequestListProcessors(nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition, **kwargs)[source]
Request to get processors from the remote.
- __init__(nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition, **kwargs) None [source]
- Parameters
nlpdef –
crate_anon.nlp_manager.nlp_definition.NlpDefinition
- get_remote_processors() List[crate_anon.nlp_webserver.server_processor.ServerProcessor] [source]
Returns the list of available processors from the remote. Unless that list was pre-specified upon construction or has already been fetched, fetches it from the server.
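The fetch-once behaviour described here is a standard lazy-caching pattern; a minimal sketch (class and field names hypothetical, not CRATE's API):

```python
from typing import Any, Dict, List, Optional


class ProcessorListClient:
    """Hypothetical sketch of the fetch-once caching pattern: the
    processor list is fetched lazily, then reused on later calls."""

    def __init__(self, preloaded: Optional[List[Dict[str, Any]]] = None) -> None:
        # A pre-specified list (e.g. from the constructor) suppresses fetching.
        self._processors = preloaded
        self.fetch_count = 0  # for illustration only

    def _fetch_from_server(self) -> List[Dict[str, Any]]:
        self.fetch_count += 1
        # Stand-in for a real request to the NLP server.
        return [{"name": "gate_medication", "version": "1.0"}]

    def get_remote_processors(self) -> List[Dict[str, Any]]:
        if self._processors is None:
            self._processors = self._fetch_from_server()
        return self._processors


client = ProcessorListClient()
first = client.get_remote_processors()   # triggers the fetch
second = client.get_remote_processors()  # served from the cache
```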
- class crate_anon.nlp_manager.cloud_request.CloudRequestProcess(crinfo: CloudRunInfo = None, nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition = None, commit: bool = False, client_job_id: str = None, **kwargs)[source]
Request to process text.
- __init__(crinfo: CloudRunInfo = None, nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition = None, commit: bool = False, client_job_id: str = None, **kwargs) None [source]
- Parameters
crinfo – a
crate_anon.nlp_manager.cloud_run_info.CloudRunInfo
nlpdef – a
crate_anon.nlp_manager.nlp_definition.NlpDefinition
commit – force a COMMIT whenever we insert data? You should specify this in multiprocess mode, or you may get database deadlocks.
client_job_id – optional string used to group together results into one job.
- add_text(text: str, metadata: Dict[str, Any]) None [source]
Adds text for analysis to the NLP request, with associated metadata.
Tests the size of the request as it would be if the text and metadata were added; adds them if the result does not exceed the size limit and the text contains word characters. Also checks whether we have reached the maximum number of records per request.
- Parameters
text – the text
metadata – the metadata (which we expect to get back later)
- Raises
RecordNotPrintable – if the record contains no printable characters
RecordsPerRequestExceeded – if the request has exceeded the maximum number of records per request
RequestTooLong – if the request has exceeded the maximum length
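The three checks can be sketched as follows (a minimal illustration of the logic described above; the limits and JSON layout are illustrative, not CRATE's actual values):

```python
import json
import re
from typing import Any, Dict, List


class RecordNotPrintable(Exception):
    pass


class RecordsPerRequestExceeded(Exception):
    pass


class RequestTooLong(Exception):
    pass


class RequestBuffer:
    """Hypothetical sketch of the three add_text checks; limits are
    illustrative only."""

    def __init__(self, max_records: int = 2, max_bytes: int = 200) -> None:
        self.max_records = max_records
        self.max_bytes = max_bytes
        self.records: List[Dict[str, Any]] = []

    def add_text(self, text: str, metadata: Dict[str, Any]) -> None:
        if not re.search(r"\w", text):
            raise RecordNotPrintable  # no word characters in the text
        if len(self.records) >= self.max_records:
            raise RecordsPerRequestExceeded
        candidate = self.records + [{"text": text, "metadata": metadata}]
        # Test the size the request WOULD have with this record added.
        if len(json.dumps(candidate).encode("utf-8")) > self.max_bytes:
            raise RequestTooLong
        self.records = candidate


buf = RequestBuffer()
buf.add_text("aspirin 75 mg", {"doc_id": 1})
try:
    buf.add_text("???", {"doc_id": 2})  # no word characters
except RecordNotPrintable:
    rejected = True
```

Note that the size check is performed on the candidate request before committing it, so a record that would overflow the limit is never partially added.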
- check_if_ready(cookies: Optional[http.cookiejar.CookieJar] = None) bool [source]
Checks if the data is ready yet. Assumes queued mode (so set_queue_id() should have been called first). If the data is ready, collects it and returns True; otherwise returns False.
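In queued mode, a caller typically polls a check of this kind until it returns True. A minimal sketch of that polling loop (the function and its delay/retry policy are hypothetical, not CRATE's scheduler):

```python
import time
from typing import Callable


def wait_until_ready(check_if_ready: Callable[[], bool],
                     delay_s: float = 0.01,
                     max_tries: int = 5) -> bool:
    """Hypothetical polling loop around a check_if_ready-style call:
    True as soon as the server reports the data ready, False if we
    give up after max_tries polls."""
    for _ in range(max_tries):
        if check_if_ready():
            return True
        time.sleep(delay_s)
    return False


# Simulated server: reports ready on the third poll.
state = {"polls": 0}


def fake_check() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3


ready = wait_until_ready(fake_check)
```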
- get_nlp_values() Generator[Tuple[Dict[str, Any], crate_anon.nlp_manager.cloud_parser.Cloud], None, None] [source]
Process response data that we have already obtained from the server, generating individual NLP results.
- Yields
(tablename, result, processor) for each result.
- Raises
KeyError –
- static get_nlp_values_gate(processor_data: Dict[str, Any], processor: crate_anon.nlp_manager.cloud_parser.Cloud, metadata: Dict[str, Any], text: str = '') Generator[Tuple[Dict[str, Any], crate_anon.nlp_manager.cloud_parser.Cloud], None, None] [source]
Gets result values from processed GATE data which will originally be in the following format:
{
    'set': set the results belong to (e.g. 'Medication'),
    'type': annotation type,
    'start': start index,
    'end': end index,
    'features': {a dictionary of features, e.g. 'drug', 'frequency', etc.}
}
- Yields
(output_tablename, formatted_result, processor_name).
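Flattening one such GATE annotation, together with the round-tripped document metadata, into a database-style row might look like this (a sketch under assumed column names; not CRATE's actual output schema):

```python
from typing import Any, Dict, Generator, Tuple

# One GATE annotation in the format shown above (illustrative values).
gate_result = {
    "set": "Medication",
    "type": "Drug",
    "start": 10,
    "end": 17,
    "features": {"drug": "aspirin", "frequency": "od"},
}


def gate_rows(
    processor_data: list, metadata: Dict[str, Any]
) -> Generator[Tuple[str, Dict[str, Any]], None, None]:
    """Hypothetical sketch: flatten each GATE annotation plus the
    round-tripped metadata into a (tablename, row) pair."""
    for ann in processor_data:
        row = {
            "_set": ann["set"],
            "_type": ann["type"],
            "_start": ann["start"],
            "_end": ann["end"],
            **ann["features"],  # e.g. 'drug', 'frequency'
            **metadata,         # e.g. source document identifiers
        }
        yield ann["type"].lower(), row


rows = list(gate_rows([gate_result], {"doc_id": 42}))
```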
- static get_nlp_values_internal(processor_data: Dict[str, Any], metadata: Dict[str, Any]) Generator[Tuple[str, Dict[str, Any], str], None, None] [source]
Gets result values from processed data from a CRATE server-side processor.
- Parameters
processor_data – NLPRP results for one processor
metadata – The metadata for a particular document - it would have been sent with the document and the server would have sent it back.
- Yields
(output_tablename, formatted_result, processor_name).
- process_all() None [source]
Puts the NLP data into the database. Very similar to crate_anon.nlp_manager.base_nlp_parser.BaseNlpParser.process(), but deals with all relevant processors at once.
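"All relevant processors at once" amounts to routing a single stream of (tablename, row) results to each processor's own output table. A sketch with an illustrative in-memory schema (table and column names are hypothetical):

```python
import sqlite3
from typing import Any, Dict, Iterable, Tuple

# Illustrative output tables for two hypothetical processors.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE drug (doc_id INTEGER, drug TEXT)")
db.execute("CREATE TABLE bp (doc_id INTEGER, systolic INTEGER)")


def insert_all(results: Iterable[Tuple[str, Dict[str, Any]]],
               commit: bool = False) -> None:
    """Hypothetical sketch: one pass over results from ALL processors,
    inserting each row into that processor's output table."""
    for tablename, row in results:
        cols = ", ".join(row)
        placeholders = ", ".join("?" for _ in row)
        # Table/column names come from trusted config in this sketch.
        db.execute(
            f"INSERT INTO {tablename} ({cols}) VALUES ({placeholders})",
            tuple(row.values()),
        )
    if commit:  # cf. the `commit` constructor argument above
        db.commit()


insert_all(
    [("drug", {"doc_id": 1, "drug": "aspirin"}),
     ("bp", {"doc_id": 1, "systolic": 120})],
    commit=True,
)
n_drugs = db.execute("SELECT COUNT(*) FROM drug").fetchone()[0]
```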
- send_process_request(queue: bool, cookies: Optional[http.cookiejar.CookieJar] = None, include_text_in_reply: bool = True) None [source]
Sends a request to the server to process the text we have stored.
- Parameters
queue – queue the request for back-end processing (rather than waiting for an immediate reply)?
cookies – optional
http.cookiejar.CookieJar
include_text_in_reply – should the server include the source text in the reply?
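The effect of the `queue` and `include_text_in_reply` flags can be sketched by building the request body; note the JSON field names here are hypothetical stand-ins, not the exact wire format CRATE sends:

```python
import json
from typing import Any, Dict, List


def build_process_request(records: List[Dict[str, Any]],
                          queue: bool,
                          include_text_in_reply: bool) -> str:
    """Illustrative sketch of what the two flags control; field names
    are hypothetical, not the actual protocol."""
    body: Dict[str, Any] = {
        "command": "process",
        "args": {
            # queue=True: server replies at once with a queue ID and
            # processes later; queue=False: server replies with results.
            "queue": queue,
            "include_text": include_text_in_reply,
            "content": records,
        },
    }
    return json.dumps(body)


req = json.loads(build_process_request(
    [{"text": "aspirin 75 mg od", "metadata": {"doc_id": 1}}],
    queue=True,
    include_text_in_reply=False,
))
```

Queued mode trades latency for robustness: the client can disconnect and later poll for readiness (see check_if_ready() above) instead of holding a connection open while the server works.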
- class crate_anon.nlp_manager.cloud_request.CloudRequestQueueManagement(nlpdef: crate_anon.nlp_manager.nlp_definition.NlpDefinition, debug_post_request: bool = False, debug_post_response: bool = False)[source]
Request to manage the queue in some way.
- delete_all_from_queue() None [source]
Delete ALL pending requests from the server’s queue. Use with caution.
- crate_anon.nlp_manager.cloud_request.report_processor_errors(processor_data: Dict[str, Any]) None [source]
Should only be called if there has been an error. Reports the error(s) to the log.