prediction.apis package#

Submodules#

prediction.apis.algorithm_client_pulse module#

prediction.apis.algorithm_client_pulse.delete_pulse_responder_dynamic(auth, db, collection, find, info=False)#

Delete a dynamic interaction configuration on the ecosystem-server

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • db – The database containing the configuration

  • collection – The collection containing the configuration

  • find – The search criteria for the configuration in the form of a MongoDB query. The find query should be of the form {"uuid": "e95bfe26-db16-4e64-9ea5-b7cf3c43f4cb"}, where the uuid is the unique identifier of the configuration
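
Example (a minimal sketch; the database and collection names are illustrative, and the uuid is the one shown above):

from prediction.apis import algorithm_client_pulse as acp

# auth is an access token created using the jwt_access module
acp.delete_pulse_responder_dynamic(
    auth,
    db="ecosystem_meta",
    collection="dynamic_engagement",
    find={"uuid": "e95bfe26-db16-4e64-9ea5-b7cf3c43f4cb"}
)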

prediction.apis.algorithm_client_pulse.generate_cox_ph_data(auth, collection, collection_out, customer_field, database, database_out, date_field, find, info=False)#
prediction.apis.algorithm_client_pulse.generate_forecast(auth, attribute, collection, collectionOut, database, dateattribute, find, historicsteps, steps, info=False)#
prediction.apis.algorithm_client_pulse.get_pulse_responder_messages(auth, db, collection, info=False)#

Deprecated

prediction.apis.algorithm_client_pulse.get_pulse_responder_options(auth, params, info=False)#

Get the options store for a dynamic interaction configuration on the ecosystem-server

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • params – A dictionary of parameters specifying the options store to retrieve. Must be a dictionary of the form {"database": "", "collection": "", "find": {}, "skip": 0, "limit": 0}
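
Example (a sketch only; the database and collection names are illustrative):

from prediction.apis import algorithm_client_pulse as acp

# retrieve up to 100 documents from the options store
acp.get_pulse_responder_options(
    auth,
    params={
        "database": "prod_telco_super_rec",
        "collection": "telco_segmented_deals_set_up_feature_store_options",
        "find": {},
        "skip": 0,
        "limit": 100
    }
)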

prediction.apis.algorithm_client_pulse.get_pulse_responder_profile(auth, db, collection, info=False)#

Deprecated

prediction.apis.algorithm_client_pulse.get_timeline(auth, user, limit, info=False)#

Get the timeline of activities for a user on the ecosystem-server

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • user – The user to get the timeline for

  • limit – The number of activities to return
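
Example (a sketch; the user value is hypothetical):

from prediction.apis import algorithm_client_pulse as acp

# return the 10 most recent activities for the user
acp.get_timeline(auth, user="user@ecosystem.ai", limit=10)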

prediction.apis.algorithm_client_pulse.list_pulse_responder_dynamic(auth, info=False)#

List all dynamic interaction configurations on the ecosystem-server

Parameters:
  • auth – Authentication token generated by the jwt_access module

prediction.apis.algorithm_client_pulse.process_apriori(auth, colItem, collection, collectionOut, custField, database, dbItem, find, itemField, supportCount, info=False)#
prediction.apis.algorithm_client_pulse.process_arima_forecast(auth, json, info=False)#
prediction.apis.algorithm_client_pulse.process_basket(auth, dbcust, colcust, searchcust, custfield, dbitem, colitem, itemfield, supportcount, info=False)#

Perform Apriori basket analysis for specified data sets of customers and items

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • dbcust – The database containing the customer data

  • colcust – The collection containing the customer data

  • searchcust – The search criteria for the customer data in the form of a MongoDB query

  • custfield – The field in the customer data to use for the analysis

  • dbitem – The database containing the item data

  • colitem – The collection containing the item data

  • itemfield – The field in the item data to use for the analysis

  • supportcount – The minimum number of times an item must appear in the data to be included in the analysis
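
Example (a sketch only; all database, collection and field names are hypothetical):

from prediction.apis import algorithm_client_pulse as acp

acp.process_basket(
    auth,
    dbcust="retail", colcust="customers", searchcust={}, custfield="customer_id",
    dbitem="retail", colitem="transactions", itemfield="item_id",
    supportcount=5  # items must appear at least 5 times to be included
)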

prediction.apis.algorithm_client_pulse.process_directed_graph(auth, graphMeta, graphParam, info=False)#

Analyze graph from data using graph meta data and analysis parameters.

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • graphMeta – Example - {"vertex":[0,1],"edges":[{"from":0,"to":1}],"from":0,"source":"/data/data.csv","dotfile":"/data/data.dot","to":1}

  • graphParam – Example - {"destination":"Bakeries","source":"12345678"}
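
Example call using the parameter formats shown above (a sketch; whether the values are passed as dictionaries or JSON strings may depend on your client version):

from prediction.apis import algorithm_client_pulse as acp

acp.process_directed_graph(
    auth,
    graphMeta={"vertex": [0, 1], "edges": [{"from": 0, "to": 1}], "from": 0,
               "source": "/data/data.csv", "dotfile": "/data/data.dot", "to": 1},
    graphParam={"destination": "Bakeries", "source": "12345678"}
)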

prediction.apis.algorithm_client_pulse.process_directed_graph2(auth, db, collection, collection_out, graphMeta, graphParam, info=False)#
prediction.apis.algorithm_client_pulse.process_ecogenetic_network(auth, collection, collectionOut, database, find, graphMeta, graphParam, info=False)#
prediction.apis.algorithm_client_pulse.save_pulse_responder_dynamic(auth, json, info=False)#

Save Dynamic interaction configuration to the ecosystem-server

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • json – The configuration to save, in JSON format. See the example below for the required format

Examples:

acp.save_pulse_responder_dynamic(auth, {
    "name": "telco_segmented_offers",
    "description": "Ecosystem Rewards recommender to select offers for telco customers",
    "date_updated": "2023-12-12T07:43:57.000586Z",
    "batch": "false",
    "score_database": "ecosystem_meta",
    "score_connection": "http://ecosystem-runtime:8091",
    "score_collection": "dynamic_engagement",
    "properties": "predictor.corpora=[{uuid:'e9063bc0-d7ec-4af8-877d-fb4288fe7eac', type:'dynamic_engagement', name:'dynamic_engagement', database:'mongodb', db:'ecosystem_meta', table:'dynamic_engagement', update:true} ,{uuid:'e9063bc0-d7ec-4af8-877d-fb4288fe7eac', type:'dynamic_engagement_options', name:'dynamic_engagement',database:'mongodb', db:'prod_telco_super_rec', table:'telco_segmented_deals_set_up_feature_store_options', update:true}]",
    "feature_store_database": "prod_telco_super_rec",
    "feature_store_collection": "telco_segmented_deals_set_up_feature_store",
    "feature_store_connection": "",
    "options_store_database": "prod_telco_super_rec",
    "options_store_collection": "telco_segmented_deals_set_up_feature_store_options",
    "options_store_connection": "",
    "uuid": "e9063bc0-d7ec-4af8-877d-fb4288fe7eac",
    "randomisation": {
        "approach": "binaryThompson",
        "test_options_across_segment": "",
        "epsilon": 0,
        "success_reward": 0.1,
        "fail_reward": 0.01,
        "prior_success_reward": 0.1,
        "prior_fail_reward": 1,
        "cross_segment_epsilon": 0.05,
        "cache_duration": 600000,
        "processing_window": "0",
        "processing_count": "0",
        "decay_gamma": 1,
        "interaction_count": "0",
        "calendar": "None",
        "batch": "false",
        "learning_rate": 0.25,
        "discount_factor": 0.75,
        "random_action": 0.2,
        "max_reward": 10
    },
    "contextual_variables": {
        "contextual_variable_one_name": "segments",
        "contextual_variable_two_name": "",
        "contextual_variable_one_values": [
        "Default",
        "Apparel",
        "LowSpender"
        ],
        "contextual_variable_two_values": "",
        "contextual_variable_one_from_data_source": true,
        "contextual_variable_two_from_data_source": false,
        "contextual_variable_one_lookup": "ts_segment",
        "contextual_variable_two_lookup": "",
        "offer_key": "offer",
        "offer_values": "",
        "take_up": "",
        "tracking_key": ""
    },
    "virtual_variables": [],
    "lookup_fields": [
        "subs_id",
        "ts_segment",
        "user_id",
        "telco_dt"
    ],
    "batch_settings": {
        "batch_outline": "",
        "pulse_responder_list": "",
        "execution_type": "",
        "database": "",
        "collection": "",
        "threads": 7,
        "find": "{}",
        "database_out": "",
        "collection_out": "",
        "campaign": "",
        "number_of_offers": 3,
        "userid": "",
        "options": "",
        "contextual_variables": "",
        "batchUpdateMessage": ""
    },
    "options": {
        "search": "{}",
        "skip": 0,
        "limit": 100,
        "options": []
    }
})
prediction.apis.algorithm_client_pulse.update_client_pulse_responder(auth, json, info=False)#

prediction.apis.auth_controller module#

prediction.apis.auth_controller.refresh_token(auth, refresh_token, info=False)#
prediction.apis.auth_controller.request_password(auth, json, info=False)#
prediction.apis.auth_controller.reset_password(auth, json, info=False)#
prediction.apis.auth_controller.restore_password(auth, json, info=False)#
prediction.apis.auth_controller.sign_out(auth, info=False)#
prediction.apis.auth_controller.sign_up(auth, json, info=False)#

prediction.apis.data_ingestion_engine module#

prediction.apis.data_ingestion_engine.add_metadocumemnts(auth, meta_documents, info=False)#

Store metadata documents

Parameters:
  • auth – Authentication token generated by the jwt_access module

  • meta_documents – A dictionary specifying the metadata documents to be stored and the location in which they are to be stored. meta_documents should be in the following format: {"database": "", "collection": "", "document": {}}
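
Example (a sketch; the database, collection and document contents are illustrative):

from prediction.apis import data_ingestion_engine as die

die.add_metadocumemnts(
    auth,
    meta_documents={
        "database": "ecosystem_meta",
        "collection": "metadata",
        "document": {"name": "customer_feature_store", "description": "Customer feature store metadata"}
    }
)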

prediction.apis.data_ingestion_engine.delete_ingestmeta(auth, ingest_meta, info=False)#
prediction.apis.data_ingestion_engine.get_databasesmeta(auth, info=False)#
prediction.apis.data_ingestion_engine.get_databasetablecolumnmeta(auth, databasename, tablename, columnname, info=False)#
prediction.apis.data_ingestion_engine.get_databasetablecolumnsmeta(auth, databasename, tablename, info=False)#
prediction.apis.data_ingestion_engine.get_databasetablesmeta(auth, databasename, info=False)#
prediction.apis.data_ingestion_engine.get_ingestmeta(auth, ingest_name, info=False)#
prediction.apis.data_ingestion_engine.get_ingestmetas(auth, info=False)#
prediction.apis.data_ingestion_engine.save_databasetablecolumn(auth, database_table_column_json, info=False)#
prediction.apis.data_ingestion_engine.save_ingestion(auth, ingestion_json, info=False)#

prediction.apis.data_management_engine module#

prediction.apis.data_management_engine.add_document_collection(auth, database, collection, info=False)#
prediction.apis.data_management_engine.add_document_database(auth, database, info=False)#
prediction.apis.data_management_engine.add_documents(auth, json, info=False)#
prediction.apis.data_management_engine.create_document_collection_index(auth, database, collection, index, info=False)#
prediction.apis.data_management_engine.create_presto_sql(auth, connection, sql, info=False)#
prediction.apis.data_management_engine.csv_file_to_json(auth, csv_file, json_file, info=False)#
prediction.apis.data_management_engine.csv_import(auth, database, collection, csv_file, info=False)#
prediction.apis.data_management_engine.csv_import2(auth, database, collection, csv_file, headerline, import_type, info=False)#
prediction.apis.data_management_engine.delete_all_documents(auth, doc_json, info=False)#
prediction.apis.data_management_engine.delete_documents(auth, doc_json, info=False)#
prediction.apis.data_management_engine.drop_document_collection(auth, database, collection, info=False)#
prediction.apis.data_management_engine.drop_document_collection_index(auth, database, collection, index, info=False)#
prediction.apis.data_management_engine.drop_document_database(auth, database, info=False)#
prediction.apis.data_management_engine.dump_document_database(auth, database, collection, folder, option, info=False)#
prediction.apis.data_management_engine.execute_mongo_db_script(auth, json, info=False)#
prediction.apis.data_management_engine.export_documents(auth, filename, filetype, database, collection, field, sort, projection, limit, info=False)#
prediction.apis.data_management_engine.get_cassandra_sql(auth, sql, info=False)#
prediction.apis.data_management_engine.get_cassandra_to_mongo(auth, database, collection, sql, info=False)#

Execute a Cassandra SQL query and ingest the results into a MongoDB collection. The ecosystem-server should be configured to connect to the target Cassandra servers.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • database – Database to ingest the data to

  • collection – Collection to ingest the data to

  • sql – Cassandra SQL query to execute
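
Example (a sketch; the keyspace, table, database and collection names are hypothetical):

from prediction.apis import data_management_engine as dme

# pull the result of the CQL query into a MongoDB collection
dme.get_cassandra_to_mongo(
    auth,
    database="ingested_data",
    collection="transactions",
    sql="SELECT * FROM my_keyspace.transactions"
)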

prediction.apis.data_management_engine.get_data(auth, database, collection, field, limit, projections, skip, info=False)#
prediction.apis.data_management_engine.get_data_aggregate(auth, database, collection, field, projections, aggregate, sort, info=False)#
prediction.apis.data_management_engine.get_data_sort(auth, database, collection, field, limit, projections, skip, sort, info=False)#
prediction.apis.data_management_engine.get_document_collection_indexes(auth, database, collection, info=False)#
prediction.apis.data_management_engine.get_document_db_aggregate2(auth, database, collection, aggregate, field, limit, projections, skip, sort, info=False)#
prediction.apis.data_management_engine.get_document_db_collection_stats(auth, database, collection, info=False)#
prediction.apis.data_management_engine.get_document_db_collections(auth, database, info=False)#
prediction.apis.data_management_engine.get_document_db_find_labels(auth, database, collection, field, projections, skip, info=False)#
prediction.apis.data_management_engine.get_document_db_list(auth, server=None, info=False)#
prediction.apis.data_management_engine.get_document_labels(auth, database, collection, info=False)#
prediction.apis.data_management_engine.get_presto_sql(auth, connection, sql, info=False)#
prediction.apis.data_management_engine.import_documents(auth, database, collection, file_name, file_type, info=False)#
prediction.apis.data_management_engine.post_mongo_db_aggregate_pipeline(auth, json, info=False)#
prediction.apis.data_management_engine.rename_collection(auth, database, collection, new_collection, info=False)#
prediction.apis.data_management_engine.restore_document_database(auth, database, collection, folder, info=False)#
prediction.apis.data_management_engine.update_key_name(auth, database, collection, find, from_key, to_key, info=False)#

prediction.apis.data_munging_engine module#

prediction.apis.data_munging_engine.auto_normalize_all(auth, database, collection, fields, find, normalized_high, normalized_low, info=False)#
prediction.apis.data_munging_engine.concat_columns(auth, databasename, collection, attribute, info=False)#
prediction.apis.data_munging_engine.concat_columns2(auth, database, collection, attribute, separator, info=False)#
prediction.apis.data_munging_engine.delete_key(auth, db, collection, attribute, find, info=False)#
prediction.apis.data_munging_engine.delete_many_documents(auth, db, collection, find, info=False)#
prediction.apis.data_munging_engine.enrich_date(auth, database, collection, attribute, info=False)#
prediction.apis.data_munging_engine.enrich_date2(auth, database, collection, attribute, find, info=False)#
prediction.apis.data_munging_engine.enrich_fragments(auth, database, collection, attribute, strings, info=False)#
prediction.apis.data_munging_engine.enrich_fragments2(auth, database, collection, attribute, strings, find, info=False)#
prediction.apis.data_munging_engine.enrich_location(auth, database, collection, attribute, info=False)#
prediction.apis.data_munging_engine.enrich_mcc(auth, database, collection, attribute, find, info=False)#
prediction.apis.data_munging_engine.enrich_sic(auth, database, collection, attribute, find, info=False)#
prediction.apis.data_munging_engine.enum_convert(auth, database, collection, attribute, info=False)#
prediction.apis.data_munging_engine.fill_values(auth, database, collection, find, attribute, value, info=False)#
prediction.apis.data_munging_engine.fill_zeros(auth, database, collection, attribute, info=False)#
prediction.apis.data_munging_engine.flatten_document(auth, db, collection, attribute, find, info=False)#
prediction.apis.data_munging_engine.foreign_key_aggregator(auth, database, collection, attribute, search, mongodbf, collectionf, attributef, fields, info=False)#
prediction.apis.data_munging_engine.foreign_key_lookup(auth, database, collection, attribute, search, mongodbf, collectionf, attributef, fields, info=False)#
prediction.apis.data_munging_engine.generate_features(auth, database, collection, featureset, categoryfield, datefield, numfield, groupby, find, info=False)#
prediction.apis.data_munging_engine.generate_features_normalize(auth, database, collection, find, inplace, normalized_high, normalized_low, numfields, info=False)#
prediction.apis.data_munging_engine.generate_time_series_features(auth, categoryfield, collection, database, datefield, featureset, find, groupby, numfield, startdate=None, windowsize=1, info=False)#
prediction.apis.data_munging_engine.get_categories(auth, database, collection, categoryfield, find, total, info=False)#
prediction.apis.data_munging_engine.get_categories_ratio(auth, database, collection, categoryfield, find, total, info=False)#
prediction.apis.data_munging_engine.munge_transactions(auth, munging_step, project_id, info=False)#
prediction.apis.data_munging_engine.munge_transactions_aggregate(auth, munging_step, project_id, info=False)#
prediction.apis.data_munging_engine.nlp_worker(auth, database, collection, database_out, collection_out, attribute, find, model='original', summarization_max=10, summarization_min=5, transformer='t5-small', model_type='nlp_b5_base', info=False)#
prediction.apis.data_munging_engine.personality_enrich(auth, category, collection, collectionOut, database, find, groupby, info=False)#
prediction.apis.data_munging_engine.predicition_enrich(auth, database, collection, search, sort, predictor, predictor_label, attributes, info=False)#
prediction.apis.data_munging_engine.prediction_enrich_fast(auth, database, collection, search, sort, predictor, predictor_label, attributes, skip, limit, info=False)#
prediction.apis.data_munging_engine.prediction_enrich_fast_post(auth, json, info=False)#
prediction.apis.data_munging_engine.process_client_pulse_reliability(auth, collection, collectionOut, database, find, groupby, mongoAttribute, typeName, info=False)#
prediction.apis.data_munging_engine.process_range(auth, db, collection, find, attribute, new_attribute, rules, info=False)#

prediction.apis.deployment_management module#

prediction.apis.deployment_management.add_network_node(auth, database, collection, node_value, node_api_params)#

Add a new network node to a network runtime configuration. Will replace existing network nodes with the same value.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • database – The database to store the configuration in.

  • collection – The collection to store the configuration in.

  • node_value – The value of the node to add. The value is used by the network runtime to determine which node should be called.

  • node_api_params – The parameters to use when calling the node. Should be a dictionary of the parameters to pass to the node. These parameters will override the parameters of the same name passed through the API call.
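
Example (a sketch; the node value and parameters are hypothetical):

from prediction.apis import deployment_management as dm

dm.add_network_node(
    auth,
    database="ecosystem_meta",
    collection="network_configuration",
    node_value="group_a",
    node_api_params={"campaign": "recommender_group_a", "channel": "app"}
)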

prediction.apis.deployment_management.create_deployment(auth, project_id, deployment_id, description, plugin_post_score_class, version, mongo_connect, plugin_pre_score_class='', budget_tracker='default', project_status='experiment', complexity='Low', performance_expectation='High', model_configuration='default', setup_offer_matrix='default', multi_armed_bandit='default', whitelist='default', model_selector='default', pattern_selector='default', logging_collection_response='ecosystemruntime_response', logging_collection='ecosystemruntime', logging_database='logging', scoring_engine_path_dev='http://ecosystem-runtime:8091', scoring_engine_path_test='http://ecosystem-runtime2:8091', scoring_engine_path_prod='http://ecosystem-runtime3:8091', models_path='/data/deployed/', data_path='/data/', build_server_path='', git_repo_path_branch='', download_path='', git_repo_path='', parameter_access='default', corpora='default', extensive_validation=False)#

Create or update a deployment linked to an existing project.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using the jwt_access package.

  • project_id – The name of the project to add the deployment step to.

  • deployment_id – The name of the deployment step that is to be created.

  • description – Description of the deployment step

  • version – The version of the deployment step being created. The combination of version and deployment_id must not already exist within the project, i.e. you cannot overwrite an existing deployment

  • project_status – Specifies the environment to which the deployment should be sent when it is pushed. The allowed values are experiment, validate, production, disable.

  • plugin_pre_score_class – The name of the pre score logic class to be used in the runtime. Only default classes can be selected here. To create custom classes please use the ecosystem-runtime-locabuild repo or edit the classes in the workbench. The allowed value is PrePredictCustomer.java

  • plugin_post_score_class – The name of the post score logic class to be used in the runtime. Only default classes can be selected here. To create custom classes please use the ecosystem-runtime-locabuild repo or edit the classes in the workbench. The allowed values are PostScoreBasic.java, PostScoreRecommender.java, PlatformDynamicEngagement.java, PostScoreRecommenderOffers.java, PostScoreRecommenderMulti.java and PostScoreNetwork.java

  • budget_tracker – A dictionary of parameters required for managing the budget tracker functionality.

  • complexity – Indicate the expected complexity of the deployment, allowed values are Low, Medium and High

  • performance_expectation – Indicate the expected performance of the deployment, allowed values are Low, Medium and High

  • model_configuration – A dictionary of the parameters specifying the models used in the project. The key item in the dictionary is models_load - a comma separated string of the names of the models to be used in the deployment. model_note and model_outline fields can also be added for tracking purposes

  • setup_offer_matrix – A dictionary of parameters specifying the location of the offer matrix - a dataset containing information about the offers that could be recommended. The dictionary must contain a datasource, database, collection and offer_lookup_id. Datasource can be one of mongodb, cassandra or presto. Database and collection specify the location of the offer_matrix in the datasource. Offer_lookup_id is the name of the column which contains the unique identifier for the offers - allowed values are offer, offer_name and offer_id

  • multi_armed_bandit – A dictionary specifying the dynamic recommender behavior of the deployment. The dictionary must contain epsilon, duration and pulse_responder_id. epsilon is the portion of interactions that are presented with random results and should be a number between 0 and 1. duration is the period, in milliseconds, for which recommendations are cached. pulse_responder_id is the uuid of a Dynamic Interaction configuration; if no Dynamic Interaction configuration is being linked, set this to ""

  • whitelist – A dictionary of parameters specifying the location of the whitelist - a dataset of customers and the list of offers for which they are eligible. The data set should contain two fields: customer_key and white_list. customer_key is the unique customer identifier and white_list is a list of offer_names for which the customer is eligible. The dictionary must contain a datasource, database and collection. Datasource can be one of mongodb, cassandra or presto. Database and collection specify the location of the whitelist in the datasource.

  • model_selector – A dictionary of parameters specifying the behavior of the model selector functionality. The model selector allows different models to be used based on the value of a field in the specified data set. The dictionary must contain datasource, database, table_collection, selector_column, selector and lookup. Datasource can be one of mongodb, cassandra or presto. Database and table_collection specify the location of the model selector dataset in the datasource. The selector_column is the name of the column in the dataset which is used to select between the different models. Lookup is a dictionary with the structure {"key":"customer","value":123 or '123',"fields":"selector_column"}, where key is the field containing the unique customer identifier, value specifies the type of the identifier as either a string ('123') or a number (123) and fields is the name of the selector column. Selector is the rule set used to choose models based on the values in the selector column; it is a dictionary with the format {"key_value_a":[0],"key_value_b":[1], …} where the keys are values from the selector column used to choose between models and the values are the indices of the models to be used, ordered as specified in the model_configuration argument

  • pattern_selector – A dictionary containing the parameters defining the behavior of the pattern selector. The dictionary contains two parameters: pattern and duration. pattern is a comma separated list of numbers which specifies the intervals at which customers are able to receive updated offers. duration defines the time intervals specified in the pattern parameter

  • parameter_access – A dictionary specifying the location from which customer data should be looked up. parameter_access should contain lookup, datasource, database, table_collection, lookup_defaults, fields, lookup_fields, create_virtual_variables and virtual_variables. lookup is a dictionary with the structure {"key":"customer","value":123 or '123'}, where key is the field containing the unique customer identifier and value specifies the type of the identifier as either a string ('123') or a number (123). datasource can be one of mongodb, cassandra or presto. database and table_collection specify the location of the customer lookup in the datasource. fields is a comma separated list of the fields that should be read from the customer lookup. lookup_defaults are the default values to be used if the customer lookup fails; set to "" to not specify defaults. lookup_fields is the fields parameter in list form, ordered alphabetically. create_virtual_variables is True if virtual variables are defined and False if not; virtual variables are defined by segmenting or combining fields from the customer lookup for use in the deployment. virtual_variables is a list of dictionaries defining the virtual variables, as shown in the virtual_variables entry in the example below

  • corpora – A list of additional datasets that are read by the deployment. corpora is a list of dictionaries where each dictionary gives the details of a data set. The dictionaries must have the following keys: database (mongodb or cassandra), db (the database containing the corpora), table (the collection containing the corpora), name (the name of the corpora used in the deployment) and type (static, dynamic or experiment). The dictionary can optionally contain a key field which, if present, is used as a lookup for each row of the corpora; the default is to have the rows loaded as an array. The type field in the dictionary specifies how the corpora is loaded: a static type is loaded at deployment, a dynamic type is loaded at each prediction and experiment is a special type used for configuring network runtimes

  • logging_database – The mongo database where the deployment logs will be stored

  • logging_collection – The mongo collection where the predictions presented will be stored

  • logging_collection_response – The mongo collection where the customer responses to the predictions will be stored

  • mongo_connect – The connection string to the mongo database used by the deployment

  • scoring_engine_path_dev – The url of the container to send the configuration to when the project status is experiment

  • scoring_engine_path_test – The url of the container to send the configuration to when the project status is validate

  • scoring_engine_path_prod – The url of the container to send the configuration to when the project status is production

  • models_path – The folder in the container where the models will be stored

  • data_path – The folder in the container where the generic data used by the container will be stored

  • build_server_path – The url of the build server to be used if customer logic is built into the container and a new container needs to be built containing said logic

  • git_repo_path – The git repo to store the customer logic

  • git_repo_path_branch – The branch to use for the repo specified in git_repo_path

  • download_path – The url on Docker Hub where the built container will be pushed

  • extensive_validation – Indicates whether potentially time-consuming validation should be run before the deployment is created. This additional validation checks whether the fields in the model_parameter are present in the linked collection and vice versa

Returns:

The deployment configuration.

EXAMPLES:

Deployment creation example for an online learning configuration with an offer matrix, customer lookup, virtual variables and a corpora specifying a default offer.

deployment_step = dm.create_deployment(
     auth_local,
     project_id=project_id,
     deployment_id="demo_online_learning",
     description="Demonstration of online learning deployment",
     plugin_pre_score_class="",
     plugin_post_score_class="PostScoreDemoDynamic.java",
     version="001",
     project_status="experiment",
     scoring_engine_path_dev="http://ecosystem-runtime:8091",
     multi_armed_bandit={
                 "epsilon": 0,
                 "duration": 0,
                 "pulse_responder_uuid": online_learning_uuid
                 },
     setup_offer_matrix={
                 "offer_lookup_id": "offer_name",
                 "database": "recommender",
                 "table_collection": "online_offer_matrix",
                 "datasource": "mongodb"
             },
     corpora=[{ "name":"default_offer","database":"mongodb","db":"recommender","table":"online_default_offer","type":"static"}],
     parameter_access={
                 "lookup": {"value": 123,"key": "customer"},
                 "create_virtual_variables":True,
                 "lookup_defaults": "",
                 "database": "recommender",
                 "table_collection": "customer_feature_store",
                 "lookup_fields": ["customer","revenue","activity","age",...],
                 "datasource": "mongodb",
                 "virtual_variables": [
                     {
                         "name": "revenue_category",
                         "default": "gt-500",
                         "type": "discretize",
                         "original_variable": "revenue",
                         "fields": [],
                         "buckets": [
                             {"from": 0,"label": "lt-50","to": 50},
                             {"from": 50,"label": "50-250","to": 250},
                             {"from": 250,"label": "250-500","to": 500}
                         ]
                     },
                     {
                         "name": "activity_category",
                         "default": "gt-15",
                         "type": "discretize",
                         "original_variable": "activity",
                         "fields": [],
                         "buckets": [
                             {"from": 0,"label": "lt-15","to": 15}
                         ]
                     }
                 ]
             }
)
prediction.apis.deployment_management.create_network_configuration(auth, database, collection, network_collection, name, type, switch_key='', selector_splits=None, selector_groups=None, selector_api_params=None)#

Create a new network configuration and store it in mongo. Existing configurations stored in the same location will be overwritten.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • database – The database to store the configuration in.

  • collection – The collection to store the configuration in.

  • network_collection – The collection to store the network in.

  • name – The name of the network configuration.

  • type – The type of the network configuration. Allowed values are lookup, lookup_passthrough, experiment_selector, no_logging_router and model_selector.

  • switch_key – The key to switch on for the lookup, lookup_passthrough and no_logging_router types.

  • selector_splits – The distribution splits for the experiment_selector type. Should be a list of numbers that define the allocation of customers to experiment groups. For example, selector_splits=[0.2,0.8] would give a 20% probability of assigning customers to group 1, a 60% probability of assigning customers to group 2 and a 20% probability of assigning customers to group 3.

  • selector_groups – The groups to allocate customers to for the experiment_selector type. Should be a list of the runtime campaign names for the network nodes.

  • selector_api_params – The parameters used when calling the runtime for the model_selector type. Should be a dictionary of the parameters to pass to the runtime.
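
Example (a sketch for an experiment_selector network; all names are hypothetical). As described above, selector_splits=[0.2,0.8] defines three groups, so selector_groups lists three runtime campaign names:

from prediction.apis import deployment_management as dm

dm.create_network_configuration(
    auth,
    database="ecosystem_meta",
    collection="network_configuration",
    network_collection="network_nodes",
    name="recommender_experiment",
    type="experiment_selector",
    selector_splits=[0.2, 0.8],  # boundaries giving 20% / 60% / 20% groups
    selector_groups=["campaign_group_1", "campaign_group_2", "campaign_group_3"]
)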

prediction.apis.deployment_management.create_openshift_enpoint(name, openshift_server, oc_path, oc_user, version='0.9.4.3', environment_variables=None, port=8999, namespace='bdp-rts-dev', replicas=1, volume='tc-bdp-rts-dev-disk', cassandra_path=None, model_path=None, use_oc=True)#

Create an OpenShift deployment for the ecosystem-runtime using a default configuration and expose the deployment as a route to allow the configuration to be updated.

Parameters:
  • name – The name of the deployment

  • openshift_server – The OpenShift server to deploy the deployment to

  • oc_path – The path to the oc executable

  • oc_user – The OpenShift user to use when connecting to oc

  • version – The version of the ecosystem-runtime container

  • environment_variables – A list of environment variables to set in the deployment

  • port – The port to expose in the deployment

  • namespace – The namespace to deploy the deployment in

  • replicas – The number of replicas in the deployment

  • volume – The volume to mount in the deployment

  • cassandra_path – The path to the cassandra configuration file

  • model_path – A list of the paths to the model files

  • use_oc – A boolean indicating whether to use the oc executable to create the deployment

Returns:

The endpoint of the deployment

prediction.apis.deployment_management.create_project(auth, project_id, project_description, project_type, purpose, project_start_date, project_end_date, data_science_lead, data_lead, module_name='', module_module_owner='', module_description='', module_created_by='', module_version='')#

Create a new project

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • project_id – The name of the project to be created.

  • project_description – Description of the project.

  • project_type – The type of the project.

  • purpose – The purpose of the project.

  • project_start_date – The start date of the project.

  • project_end_date – The end date of the project.

  • data_science_lead – The data science lead of the project.

  • data_lead – The data lead of the project.

  • module_name – The name of the module.

  • module_module_owner – The owner of the module.

  • module_description – The description of the module.

  • module_created_by – The creator of the module.

  • module_version – The version of the module.
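
Example (a sketch; the values, including the date format, are illustrative):

from prediction.apis import deployment_management as dm

dm.create_project(
    auth,
    project_id="demo_project",
    project_description="Demonstration recommender project",
    project_type="Recommender",
    purpose="Demonstration",
    project_start_date="2024-01-01",
    project_end_date="2024-12-31",
    data_science_lead="user@ecosystem.ai",
    data_lead="user@ecosystem.ai"
)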

prediction.apis.deployment_management.define_deployment_model_configuration(models_to_load, models_note='', models_outline='Recommender')#

Define the model configuration structure for a deployment step

Parameters:
  • models_to_load – A list of model names to be loaded

  • models_note – A note to store with the model

  • models_outline – The structure of the models

prediction.apis.deployment_management.define_deployment_model_selector(database, table_collection, datasource, lookup_key, lookup_type, selector_column, selector, model_configuration, lookup_default='')#

Define the model selector structure for a deployment step

Parameters:
  • database – The database to be used for lookup

  • table_collection – The table or collection to be used for lookup

  • datasource – The type of datasource to be used for lookup. Allowed values are mongodb, cassandra and presto

  • lookup_key – The key field to be used for lookup in the model selector structure

  • lookup_type – The type of lookup key in the model selector data set. Allowed values are string and int

  • selector_column – The column in the lookup database to be used for selection

  • selector – A dictionary specifying the model to use for each value of the selector column. The keys are the values of the selector column and the values are lists of the model names.

  • model_configuration – The model configuration structure to be used for the deployment step

  • lookup_default – The default value for the lookup key

prediction.apis.deployment_management.define_deployment_multi_armed_bandit(epsilon, duration=0, dynamic_interaction_uuid='')#

Define the multi armed bandit structure for a deployment step

Parameters:
  • epsilon – The epsilon value for the deployment

  • duration – The cache duration for the deployment

  • dynamic_interaction_uuid – The uuid of the dynamic interaction configuration to be used for the deployment
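
Example (a sketch): build the structure for a deployment linked to an online learning configuration, where online_learning_uuid is the identifier returned by create_online_learning:

from prediction.apis import deployment_management as dm

multi_armed_bandit = dm.define_deployment_multi_armed_bandit(
    epsilon=0,
    duration=0,
    dynamic_interaction_uuid=online_learning_uuid
)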

prediction.apis.deployment_management.define_deployment_parameter_access(auth, lookup_key, lookup_type, database, table_collection, datasource, lookup_fields=None, lookup_default='', defaults='', virtual_variables='')#

Define the parameter access structure for a deployment step

Parameters:
  • lookup_key – The key field to be used for lookup in the parameter access structure

  • lookup_type – The type of lookup key in the lookup data set. Allowed values are string and int

  • database – The database to be used for lookup

  • table_collection – The table or collection to be used for lookup

  • lookup_fields – A list of fields to be returned from the lookup. If not specified, the list of fields will be looked up from the specified data source and all fields will be returned

  • datasource – The type of datasource to be used for lookup. Allowed values are mongodb, cassandra and presto

  • lookup_default – The default value for the lookup key

  • defaults – A list of default values for fields in the lookup data set

  • virtual_variables – A list of virtual variables to be used in the parameter access structure. Virtual variables can be defined using the define_deployment_virtual_variable function.

prediction.apis.deployment_management.define_deployment_setup_offer_matrix(database, table_collection, datasource, offer_lookup_id)#

Define the setup offer matrix structure for a deployment step

Parameters:
  • database – The database containing the offer matrix

  • table_collection – The collection or table containing the offer matrix

  • datasource – The type of datasource containing the offer matrix. Allowed values are mongodb, cassandra and presto

  • offer_lookup_id – The name of the column containing the unique identifier for the offers. Allowed values are offer, offer_id and offer_name

prediction.apis.deployment_management.define_deployment_virtual_variable(name, original_variable, default, variable_type, variables='', buckets='')#

Define a virtual variable to be used in a parameter access structure in a deployment step. Virtual variable definitions should be stored in a list that is added to the parameter access structure.

Parameters:
  • name – The name of the virtual variable

  • original_variable – The name of the original variable that the virtual variable is based on

  • default – The default value of the virtual variable

  • variable_type – The type of the virtual variable. Allowed values are discretize and concatenate

  • variables – A list of variables to be concatenated. Required if variable_type is concatenate

  • buckets – A list of buckets to be used for discretization. Should be a list of dictionaries of the form [{"from":0,"to":15,"label":"bucket_1"}]. Required if variable_type is discretize
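
Example, mirroring the revenue_category virtual variable from the create_deployment example above:

from prediction.apis import deployment_management as dm

revenue_category = dm.define_deployment_virtual_variable(
    name="revenue_category",
    original_variable="revenue",
    default="gt-500",
    variable_type="discretize",
    buckets=[
        {"from": 0, "to": 50, "label": "lt-50"},
        {"from": 50, "to": 250, "label": "50-250"},
        {"from": 250, "to": 500, "label": "250-500"}
    ]
)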

prediction.apis.deployment_management.define_deployment_whitelist(database, table_collection, datasource)#

Define the whitelist structure for a deployment step. The whitelist dataset must contain a customer_key column and a whitelist column which is a comma separated list of eligible offer names

Parameters:
  • database – The database containing the whitelist

  • table_collection – The table or collection containing the whitelist

  • datasource – The type of datasource containing the whitelist. Allowed values are mongodb, cassandra and presto

prediction.apis.deployment_management.distribution_test(auth, auth_runtime, testing_config, output_level)#

Test the runtime by making multiple calls and checking the distribution of the responses. Limited to calls for 10000 customers

Parameters:
  • auth – The authentication object for the ecosystem-server

  • auth_runtime – The authentication object for the runtime

  • testing_config – The testing configuration

  • output_level – The level of output to print. Options are “quiet” and “verbose”

Returns:

True if the tests pass, False otherwise

prediction.apis.deployment_management.get_api_endpoint_code_default()#
prediction.apis.deployment_management.get_budget_tracker_default()#
prediction.apis.deployment_management.get_column_list(auth, database, table_collection, datasource)#

Get a list of columns in a table or collection in a database.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using the jwt_access package.

  • database – The name of the database to get the columns from

  • table_collection – The name of the table or collection to get the columns from

  • datasource – The type of datasource to get the columns from. Allowed values are mongodb and cassandra

prediction.apis.deployment_management.get_corpora_default()#
prediction.apis.deployment_management.get_deployment_step(auth, project_id, deployment_id, version)#

Get a deployment step from a project

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using the jwt_access package.

  • project_id – The name of the project containing the deployment step

  • deployment_id – The name of the deployment step to get

  • version – The version of the deployment step to get

prediction.apis.deployment_management.get_model_configuration_default()#
prediction.apis.deployment_management.get_model_selector_default()#
prediction.apis.deployment_management.get_multi_armed_bandit_default()#
prediction.apis.deployment_management.get_openshift_deployment_config(name, version, environment_variables, namespace='bdp-rts-dev', volume='tc-bdp-rts-dev-disk', replicas=1, port=8999)#

Create a deployment configuration yaml file for an OpenShift deployment. Will create a yaml file with the deployment configuration in the working directory.

Parameters:
  • name – The name of the deployment

  • version – The version of the ecosystem-runtime container

  • environment_variables – A list of environment variables to set in the deployment

  • namespace – The namespace to deploy the deployment in

  • volume – The volume to mount in the deployment

  • replicas – The number of replicas in the deployment

  • port – The port to expose in the deployment

Returns:

The deployment configuration

prediction.apis.deployment_management.get_openshift_service_ips(openshift_server, oc_path, oc_user, use_oc=True)#

Get the IP addresses of the services running in OpenShift. If there are multiple services, the IP addresses of each service will be returned.

Parameters:
  • openshift_server – The OpenShift server from which the services should be retrieved

  • oc_path – The path to the oc executable

  • oc_user – The OpenShift user to use when connecting to oc

  • use_oc – A boolean indicating whether to use the oc executable to get the IP addresses of the services

prediction.apis.deployment_management.get_parameter_access_default()#
prediction.apis.deployment_management.get_pattern_selector_default()#
prediction.apis.deployment_management.get_post_score_code(post_score, project_details)#
prediction.apis.deployment_management.get_pre_score_code(pre_score, project_details)#
prediction.apis.deployment_management.get_setup_offer_matrix_default()#
prediction.apis.deployment_management.get_whitelist_default()#

Link collections to a project

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • project_id – The name of the project to link the collections to.

  • collections – The collections to link to the project in the format [{"database":"linked_database","collection":"linked_collection"}]

Link dynamic interactions to a project

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • project_id – The name of the project to link the interactions to.

  • interactions – The interactions to link to the project in the format [{"uuid":"f4990ecd-d438-4260-85ae-fc1fd915a266"}]

prediction.apis.deployment_management.remove_network_node(auth, database, collection, node_value)#

Remove a network node from a network runtime configuration.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • database – The database to store the configuration in.

  • collection – The collection to store the configuration in.

  • node_value – The value of the node to remove.

prediction.apis.deployment_management.single_test_calls(auth_runtime, testing_config, output_level)#

Test the runtime by making individual calls and checking the response against user defined criteria

Parameters:
  • auth_runtime – The authentication object for the runtime

  • testing_config – The testing configuration

  • output_level – The level of output to print. Options are “quiet” and “verbose”

Returns:

True if the tests pass, False otherwise

prediction.apis.deployment_management.tail_openshift_logs(name, openshift_server, oc_path, oc_user, lines, use_oc=True)#

Tail the logs of an ecosystem-runtime deployment in OpenShift. If there are multiple pods for the deployment, the logs of each pod will be tailed.

Parameters:
  • name – The name of the deployment

  • openshift_server – The OpenShift server where the deployment is running

  • oc_path – The path to the oc executable

  • oc_user – The OpenShift user to use when connecting to oc

  • lines – The number of lines to tail from the log

  • use_oc – A boolean indicating whether to use the oc executable to tail the logs

prediction.apis.deployment_management.test_deployment(auth, auth_runtime, project_id, deployment_id, version, output_level='quiet')#

Test a deployment using the testing configuration saved for the deployment

Parameters:
  • auth – The authentication object for the ecosystem-server

  • auth_runtime – The authentication object for the runtime

  • project_id – The project_id of the deployment

  • deployment_id – The deployment_id of the deployment

  • version – The version of the deployment

  • output_level – The level of output to print. Options are “quiet” and “verbose”
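
Example (a sketch; the identifiers echo the create_deployment example above):

from prediction.apis import deployment_management as dm

dm.test_deployment(
    auth,
    auth_runtime,
    project_id="demo_project",
    deployment_id="demo_online_learning",
    version="001",
    output_level="verbose"
)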

prediction.apis.deployment_management.udate_properties_and_refresh(name, openshift_server, oc_path, oc_user, properties=None, port=8999, use_oc=True)#

Update the properties of an ecosystem-runtime deployment in OpenShift and refresh the deployment to apply the changes.

Parameters:
  • name – The name of the deployment

  • openshift_server – The OpenShift server where the deployment is running

  • oc_path – The path to the oc executable

  • oc_user – The OpenShift user to use when connecting to oc

  • properties – The properties to push to the ecosystem-runtime

  • port – The port to expose in the deployment

  • use_oc – A boolean indicating whether to use the oc executable to update the deployment

prediction.apis.ecosystem_generation_engine module#

prediction.apis.ecosystem_generation_engine.generate_build(auth, json, info=False)#
prediction.apis.ecosystem_generation_engine.get_build(auth, uuid, info=False)#
prediction.apis.ecosystem_generation_engine.process_build(auth, json, info=False)#
prediction.apis.ecosystem_generation_engine.process_push(auth, json, info=False)#

Push a deployment to the ecosystem-runtime

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • json – The deployment config to be pushed to the ecosystem-runtime.

prediction.apis.ecosystem_home module#

prediction.apis.ecosystem_home.fling(auth, message, info=False)#
prediction.apis.ecosystem_home.get_v1_health(auth, info=False)#
prediction.apis.ecosystem_home.ping(auth, message, info=False)#
prediction.apis.ecosystem_home.post_v1_health(auth, notification, info=False)#

prediction.apis.ecosystem_main module#

prediction.apis.ecosystem_main.create_profile(auth, json, info=False)#
prediction.apis.ecosystem_main.profiles(auth, info=False)#

prediction.apis.ecosystem_user_profiles module#

prediction.apis.ecosystem_user_profiles.activities(auth, user, info=False)#
prediction.apis.ecosystem_user_profiles.archive(auth, userid, info=False)#
prediction.apis.ecosystem_user_profiles.post_activity(auth, json, info=False)#
prediction.apis.ecosystem_user_profiles.profile(auth, userid, info=False)#
prediction.apis.ecosystem_user_profiles.validate(auth, userid, password, info=False)#

prediction.apis.functions module#

prediction.apis.functions.get_list_of_fields(db, collection)#
prediction.apis.functions.save_file_as_userframe(auth, data_file, feature_store, user_name)#

prediction.apis.interactions module#

prediction.apis.online_learning_management module#

prediction.apis.online_learning_management.create_online_learning(auth, name, description, feature_store_collection, feature_store_database, options_store_database, options_store_collection, contextual_variables_offer_key, score_connection='http://ecosystem-runtime:8091', score_database='ecosystem_meta', score_collection='dynamic_engagement', algorithm='ecosystem_rewards', options_store_connection='', batch='false', feature_store_connection='', contextual_variables_contextual_variable_one_from_data_source=False, contextual_variables_contextual_variable_one_lookup='', contextual_variables_contextual_variable_one_name='', contextual_variables_contextual_variable_two_from_data_source=False, contextual_variables_contextual_variable_two_name='', contextual_variables_contextual_variable_two_lookup='', contextual_variables_tracking_key='', contextual_variables_take_up='', batch_database_out='', batch_collection_out='', batch_threads=1, batch_collection='', batch_userid='', batch_contextual_variables='', batch_number_of_offers=1, batch_database='', batch_pulse_responder_list='', batch_find='{}', batch_options='', batch_campaign='', batch_execution_type='', randomisation_calendar='None', randomisation_test_options_across_segment='', randomisation_processing_count=1000, randomisation_discount_factor=0.75, randomisation_batch='false', randomisation_prior_fail_reward=0.1, randomisation_cross_segment_epsilon=0, randomisation_success_reward=1, randomisation_interaction_count='0', randomisation_epsilon=0, randomisation_prior_success_reward=1, randomisation_fail_reward=0.1, randomisation_max_reward=10, randomisation_cache_duration=0, randomisation_processing_window=86400000, randomisation_random_action=0.2, randomisation_decay_gamma='1', randomisation_learning_rate=0.25, randomisation_missing_offers='none', randomisation_training_data_source='feature_store', virtual_variables=None, dynamic_eligibility=None, replace=False, update=False, create_options_index=True, create_covering_index=True)#

Create a new online learning configuration.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using the jwt_access package.

  • name – The name of the online learning configuration.

  • description – The description of the online learning configuration.

  • feature_store_collection – The collection containing the setup feature store.

  • feature_store_database – The database containing the setup feature store.

  • options_store_database – The database containing the options store.

  • options_store_collection – The collection containing the options store.

  • contextual_variables_offer_key – The key in the setup feature store collection that contains the offer.

  • score_connection – Used when batch processing is enabled. The connection string to the runtime engine to use for batch processing.

  • score_database – The database where the online learning configuration is stored

  • score_collection – The collection where the online learning configuration is stored

  • algorithm – The algorithm to use for the online learning configuration. Currently only “ecosystem_rewards”, “bayesian_probabilistic” and “q_learning” are supported.

  • options_store_connection – The connection string to the options store.

  • batch – A boolean indicating whether batch processing should be enabled.

  • feature_store_connection – The connection string to the setup feature store.

  • contextual_variables_contextual_variable_one_from_data_source – A boolean indicating whether the first contextual variable should be read from the deployment customer lookup.

  • contextual_variables_contextual_variable_one_name – The field in the setup feature store to be used for the first contextual variable.

  • contextual_variables_contextual_variable_one_lookup – The key in the deployment customer lookup that contains the first contextual variable.

  • contextual_variables_contextual_variable_two_name – The field in the setup feature store to be used for the second contextual variable.

  • contextual_variables_contextual_variable_two_lookup – The key in the deployment customer lookup that contains the second contextual variable.

  • contextual_variables_contextual_variable_two_from_data_source – A boolean indicating whether the second contextual variable should be read from the deployment customer lookup.

  • contextual_variables_tracking_key – The field in the setup feature store to be used for the tracking key.

  • contextual_variables_take_up – The field in the setup feature store to be used for the take-up.

  • batch_database_out – The database to store the batch output in.

  • batch_collection_out – The collection to store the batch output in.

  • batch_threads – The number of threads to use for batch processing.

  • batch_collection – The collection to read the batch data from.

  • batch_userid – The user to be passed to the batch runtime.

  • batch_contextual_variables – The contextual variables to be used in the batch processing.

  • batch_number_of_offers – The number of offers to be used in the batch processing.

  • batch_database – The database to read the batch data from.

  • batch_pulse_responder_list – The list of runtimes to be used in the batch processing.

  • batch_find – The query to be used to find the batch data.

  • batch_options – The options to be used in the batch processing.

  • batch_campaign – The campaign to be used in the batch processing.

  • batch_execution_type – The execution type to be used in the batch processing. Allowed values are “internal” and “external”.

  • randomisation_calendar – The calendar to be used.

  • randomisation_test_options_across_segment – Boolean variable indicating whether offers should be tested outside of their allowed contextual variable segments.

  • randomisation_processing_count – The number of interactions to be processed.

  • randomisation_discount_factor – The discount factor to be used in the randomisation.

  • randomisation_batch – Boolean variable indicating whether batch processing should be enabled.

  • randomisation_prior_fail_reward – The prior fail reward to be used in the randomisation.

  • randomisation_cross_segment_epsilon – The cross segment epsilon to be used in the randomisation.

  • randomisation_success_reward – The success reward to be used in the randomisation.

  • randomisation_interaction_count – The number of interactions to be used in the randomisation.

  • randomisation_epsilon – The epsilon to be used in the randomisation.

  • randomisation_prior_success_reward – The prior success reward to be used in the randomisation.

  • randomisation_fail_reward – The fail reward to be used in the randomisation.

  • randomisation_max_reward – The maximum reward to be used in the randomisation.

  • randomisation_cache_duration – The cache duration to be used in the randomisation.

  • randomisation_processing_window – The processing window to be used in the randomisation.

  • randomisation_random_action – The random action to be used in the randomisation.

  • randomisation_decay_gamma – The decay gamma to be used in the randomisation.

  • randomisation_learning_rate – The learning rate to be used in the randomisation.

  • randomisation_missing_offers – The approach used to add scores for offers not present in the training set for the Bayesian probabilistic algorithm. Allowed values are “none” and “uniform”.

  • randomisation_training_data_source – The data source to be used for training the q-learning algorithm. Allowed values are “feature_store” and “logging”.

  • virtual_variables – A list of virtual variables to be used in the online learning configuration.

  • dynamic_eligibility – A dictionary specifying the eligibility rules to be applied when selecting offers for the online learning configuration.

  • replace – A boolean indicating whether the online learning configuration should be replaced if it already exists.

  • update – A boolean indicating whether the online learning configuration should be updated if it already exists.

  • create_options_index – A boolean indicating whether an index should be created on the options store collection. This index greatly improves response times.

  • create_covering_index – A boolean indicating whether a covering index should be created on the options store collection. A covering index greatly improves response times but does not make all fields in the options store available in the post-scoring logic.

Returns:

The UUID of the online learning configuration, which should be linked to the deployment for the project.
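
As an illustration, a minimal sketch of calling this configuration function with a subset of the keyword arguments documented above. This assumes the function is online_learning_management.online_learning_ecosystem_rewards_setup (an assumption; the signature is not shown in this section), that auth is a token created with the jwt_access module, and that all database, collection, and variable names are hypothetical placeholders:

    from prediction.apis import online_learning_management

    # Sketch only: the function name and all names below are assumptions,
    # not confirmed by this reference. Only some of the documented keyword
    # arguments are shown; other required parameters are elided.
    config_uuid = online_learning_management.online_learning_ecosystem_rewards_setup(
        auth,
        contextual_variables_contextual_variable_one_name="age_band",
        contextual_variables_contextual_variable_two_name="region",
        batch_database="recommender_db",
        batch_collection="batch_customers",
        batch_threads=4,
        randomisation_epsilon=0.1,
        randomisation_success_reward=1.0,
        randomisation_fail_reward=0.1,
        create_options_index=True,
        replace=True,
    )
    # The returned UUID should be linked to the project's deployment.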

prediction.apis.online_learning_management.online_learning_ecosystem_rewards_setup_feature_store(auth, offer_db, offer_collection, offer_name_column, contextual_variables, setup_feature_store_db, setup_feature_store_collection)#

Add contextual variables to a setup feature store for the ecosystem rewards dynamic recommender using a collection containing the relevant offers.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using the jwt_access package.

  • offer_db – The database containing the offers.

  • offer_collection – The collection containing the offers.

  • offer_name_column – The column in the collection containing the offer names.

  • contextual_variables – A dictionary with the contextual variable names as keys. Each value should be a list of the possible values of the corresponding contextual variable.

  • setup_feature_store_db – The database to store the setup feature store in.

  • setup_feature_store_collection – The collection to store the setup feature store in.
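
For example, a minimal sketch of building a setup feature store from an offers collection, assuming auth was created with the jwt_access module; all database, collection, and variable names are hypothetical:

    from prediction.apis import online_learning_management

    # Sketch only: database, collection, and field names are hypothetical.
    online_learning_management.online_learning_ecosystem_rewards_setup_feature_store(
        auth,
        offer_db="recommender_db",
        offer_collection="offers",
        offer_name_column="offer_name",
        contextual_variables={
            "age_band": ["18-25", "26-40", "41-65", "66+"],
            "region": ["north", "south", "east", "west"],
        },
        setup_feature_store_db="recommender_db",
        setup_feature_store_collection="setup_feature_store",
    )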

prediction.apis.prediction_engine module#

prediction.apis.prediction_engine.add_pretrained_model(auth, model_name, info=False)#

Add a pretrained h2o model to the ecosystem-server. The model zip file should be located in a models folder.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • model_name – The name of the model to be added to the ecosystem-server.
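
For example, a minimal sketch, assuming auth was created with jwt_access and that the zip file for a model named "churn_mojo" (a hypothetical name) is already in the models folder:

    from prediction.apis import prediction_engine

    # Sketch only: "churn_mojo" is a hypothetical model name; whether the
    # name should include the .zip extension is not specified here.
    prediction_engine.add_pretrained_model(auth, "churn_mojo")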

prediction.apis.prediction_engine.delete_analysis(auth, analysis_id, info=False)#
prediction.apis.prediction_engine.delete_model(auth, model_id, info=False)#
prediction.apis.prediction_engine.delete_prediction(auth, prediction_id, info=False)#
prediction.apis.prediction_engine.delete_prediction_project(auth, prediction_project_id, info=False)#
prediction.apis.prediction_engine.delete_user_deployments(auth, user_deployments_id, info=False)#
prediction.apis.prediction_engine.delete_userframe(auth, frame_id, info=False)#
prediction.apis.prediction_engine.deploy_predictor(auth, json, info=False)#
prediction.apis.prediction_engine.download_project(auth, project_id, module_name, info=False)#

Create a module zip file from the project specified by project_id.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • project_id – The name of the project to be converted to a module.

  • module_name – The name of the module specified in the project config.
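
For example, a minimal sketch; the project and module names are hypothetical, and module_name must match the module name in the project config:

    from prediction.apis import prediction_engine

    # Sketch only: names are hypothetical placeholders.
    prediction_engine.download_project(auth, "customer_churn", "churn_module")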

prediction.apis.prediction_engine.get_analysis_result(auth, analysis_id, info=False)#
prediction.apis.prediction_engine.get_analysis_results(auth, info=False)#
prediction.apis.prediction_engine.get_featurestore(auth, frame_id, info=False)#
prediction.apis.prediction_engine.get_featurestores(auth, info=False)#
prediction.apis.prediction_engine.get_prediction(auth, predict_id, info=False)#
prediction.apis.prediction_engine.get_prediction_project(auth, project_id, info=False)#
prediction.apis.prediction_engine.get_prediction_projects(auth, info=False)#
prediction.apis.prediction_engine.get_prediction_projects_base(auth, info=False)#
prediction.apis.prediction_engine.get_uframe(auth, frame_id, info=False)#
prediction.apis.prediction_engine.get_user_deployment(auth, user_deployments, info=False)#
prediction.apis.prediction_engine.get_user_deployments(auth, info=False)#
prediction.apis.prediction_engine.get_user_featurestores(auth, info=False)#
prediction.apis.prediction_engine.get_user_files(auth, info=False)#
prediction.apis.prediction_engine.get_user_model(auth, model_identity, info=False)#
prediction.apis.prediction_engine.get_user_models(auth, info=False)#
prediction.apis.prediction_engine.get_user_predictions(auth, info=False)#
prediction.apis.prediction_engine.import_module(auth, module_id, info=False)#

Import a module to the ecosystem-server.

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • module_id – The name of the module to be imported.
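
For example, a minimal sketch importing a module previously exported with download_project; the module name is hypothetical:

    from prediction.apis import prediction_engine

    # Sketch only: "churn_module" is a hypothetical module name.
    prediction_engine.import_module(auth, "churn_module")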

prediction.apis.prediction_engine.save_analysis(auth, analysis, info=False)#
prediction.apis.prediction_engine.save_model(auth, model, info=False)#
prediction.apis.prediction_engine.save_prediction(auth, prediction, info=False)#
prediction.apis.prediction_engine.save_prediction_project(auth, prediction_project, info=False)#
prediction.apis.prediction_engine.save_user_deployments(auth, user_deployments, info=False)#
prediction.apis.prediction_engine.save_user_frame(auth, user_frame, info=False)#
prediction.apis.prediction_engine.test_model(auth, value, info=False)#

prediction.apis.quickflat module#

class prediction.apis.quickflat.QuickFlat(config)#

Bases: object

flatten()#

prediction.apis.settings_controller module#

prediction.apis.settings_controller.current(auth, info=False)#

prediction.apis.transaction_categorization module#

prediction.apis.transaction_categorization.get_transactions(auth, json, info=False)#
prediction.apis.transaction_categorization.get_transactions_cat_predicted(auth, json, info=False)#
prediction.apis.transaction_categorization.get_transactions_processed(auth, count, info=False)#

prediction.apis.user_controller module#

prediction.apis.user_controller.get_current_user(auth, info=False)#
prediction.apis.user_controller.post_users(auth, json, info=False)#
prediction.apis.user_controller.users(auth, filterByage, filterBycity, filterByemail, filterByfirstName, filterBylastName, filterBylogin, filterBystreet, filterByzipcode, orderBy, pageNumber, pageSize, sortBy, info=False)#

prediction.apis.utilities module#

prediction.apis.utilities.convert_json_to_yaml(auth, yaml, info=False)#
prediction.apis.utilities.convert_range_from_to(auth, rules, value, info=False)#
prediction.apis.utilities.convert_text_file_from_to(auth, in_delimiter, in_file, out_delimiter, out_file, rules, info=False)#
prediction.apis.utilities.copy_file(auth, f_from, f_to, info=False)#
prediction.apis.utilities.create_json_from_text(auth, in_file, out_file, info=False)#
prediction.apis.utilities.execute_generic(auth, script, info=False)#
prediction.apis.utilities.get_config(auth, info=False)#
prediction.apis.utilities.get_container_log(auth, lines, type, info=False)#
prediction.apis.utilities.get_file(auth, file_name, lines, info=False)#
prediction.apis.utilities.get_rest_generic(auth, data, info=False)#

prediction.apis.worker_aws module#

prediction.apis.worker_aws.create_tenant(auth, tenant, info=False)#
prediction.apis.worker_aws.delete_tenant(auth, tenant, action, info=False)#
prediction.apis.worker_aws.get_status(auth, tenant, info=False)#
prediction.apis.worker_aws.list_tenants(auth, info=False)#
prediction.apis.worker_aws.manage_tenant(auth, tenant, action, info=False)#
prediction.apis.worker_aws.post_create_tenant(auth, data, info=False)#
prediction.apis.worker_aws.post_set_credentials(auth, data, info=False)#
prediction.apis.worker_aws.set_configuration(auth, conf_string, info=False)#
prediction.apis.worker_aws.set_credentials(auth, cred_string, info=False)#
prediction.apis.worker_aws.set_tenant(auth, tenant_string, info=False)#
prediction.apis.worker_aws.validate_tenant(auth, tenant, info=False)#

prediction.apis.worker_file_service module#

prediction.apis.worker_file_service.convert_csv_from_to(auth, char, infile, outfile, info=False)#
prediction.apis.worker_file_service.convert_remove_non_printables(auth, infile, outfile, info=False)#
prediction.apis.worker_file_service.convert_text_file_from_to(auth, in_delimiter, infile, out_delimiter, outfile, rules, info=False)#
prediction.apis.worker_file_service.copy_file(auth, from_path, to_path, user='', info=False)#
prediction.apis.worker_file_service.delete_file(auth, path, user='', info=False)#
prediction.apis.worker_file_service.download(auth, target_path, download_path, info=False)#

Download a file from the ecosystem-server

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • target_path – The path of the file to be downloaded.

  • download_path – The path where the downloaded file will be saved.
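
For example, a minimal sketch, assuming auth was created with jwt_access; both paths are hypothetical:

    from prediction.apis import worker_file_service

    # Sketch only: copy a server-side file to the local machine.
    worker_file_service.download(auth, "/data/exports/results.csv", "./results.csv")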

prediction.apis.worker_file_service.file_delete(auth, path, info=False)#
prediction.apis.worker_file_service.get_file(auth, path, file_path, lines, info=False)#
prediction.apis.worker_file_service.get_file_tail(auth, path, file_path, lines, info=False)#
prediction.apis.worker_file_service.get_files(auth, path='./', user='', info=False)#
prediction.apis.worker_file_service.get_property(auth, property_key, info=False)#
prediction.apis.worker_file_service.process_push(auth, json, info=False)#
prediction.apis.worker_file_service.update_properties(auth, properties, info=False)#
prediction.apis.worker_file_service.upload_file(auth, path, target_path, info=False)#

Upload a file to the ecosystem-server

Parameters:
  • auth – Token for accessing the ecosystem-server. Created using jwt_access.

  • path – The path of the file to be uploaded.

  • target_path – The path where the file will be uploaded.
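
For example, a minimal sketch mirroring the download example above; both paths are hypothetical:

    from prediction.apis import worker_file_service

    # Sketch only: push a local file to a path on the ecosystem-server.
    worker_file_service.upload_file(auth, "./training_data.csv", "/data/uploads/training_data.csv")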

prediction.apis.worker_google module#

prediction.apis.worker_google.get_security(auth, path, info=False)#
prediction.apis.worker_google.get_sentiment(auth, string, info=False)#

prediction.apis.worker_h2o module#

prediction.apis.worker_h2o.cancel_job(auth, jobid, info=False)#
prediction.apis.worker_h2o.change_to_enum(auth, frame, column, info=False)#
prediction.apis.worker_h2o.delete_frame(auth, frame, info=False)#
prediction.apis.worker_h2o.download_model(auth, mojo_id, predict_id, info=False)#
prediction.apis.worker_h2o.download_model_mojo(auth, modelid, info=False)#
prediction.apis.worker_h2o.export_frame(auth, frame, info=False)#
prediction.apis.worker_h2o.featurestore_to_frame(auth, userframe, info=False)#
prediction.apis.worker_h2o.file_to_frame(auth, file_name, first_row_column_names, separator, info=False)#
prediction.apis.worker_h2o.generate_model_detail(auth, info=False)#

Process the models saved to the ecosystem-server and generate the model details for display on the ecosystem-workbench.

Parameters:

auth – Token for accessing the ecosystem-server. Created using jwt_access.
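
For example, a minimal sketch, assuming auth was created with jwt_access:

    from prediction.apis import worker_h2o

    # Refresh model details so saved models display on the ecosystem-workbench.
    worker_h2o.generate_model_detail(auth)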

prediction.apis.worker_h2o.get_frame(auth, frameid, info=False)#
prediction.apis.worker_h2o.get_frame_column_summary(auth, frame, column, info=False)#
prediction.apis.worker_h2o.get_frame_columns(auth, frameid, info=False)#
prediction.apis.worker_h2o.get_model_stats(auth, modelid, source, statstype, info=False)#
prediction.apis.worker_h2o.get_train_model(auth, modelid, modeltype, info=False)#
prediction.apis.worker_h2o.import_sql_table(auth, json, info=False)#
prediction.apis.worker_h2o.model_grids(auth, model, info=False)#
prediction.apis.worker_h2o.prediction_frames(auth, info=False)#
prediction.apis.worker_h2o.prediction_jobs(auth, info=False)#
prediction.apis.worker_h2o.prediction_models(auth, info=False)#
prediction.apis.worker_h2o.split_frame(auth, frame, ratio, info=False)#
prediction.apis.worker_h2o.train_model(auth, modelid, modeltype, params, info=False)#

prediction.apis.worker_microsoft module#

prediction.apis.worker_microsoft.get_anomaly(auth, string, info=False)#

prediction.apis.worker_open_ai module#

prediction.apis.worker_open_ai.get_open_ai_result(auth, json, info=False)#

prediction.apis.worker_uber module#

prediction.apis.worker_uber.build_model_ludwig(auth, model_name, model_definition, info=False)#
prediction.apis.worker_uber.build_orbit_btvc(auth, input_database, input_collection, output_database, output_collection, response_column, date_column, seasonality, seed, find, params, info=False)#
prediction.apis.worker_uber.build_orbit_dlt(auth, input_database, input_collection, output_database, output_collection, response_column, date_column, seasonality, seed, find, params, info=False)#
prediction.apis.worker_uber.build_orbit_ets(auth, input_database, input_collection, output_database, output_collection, response_column, date_column, seasonality, seed, find, info=False)#
prediction.apis.worker_uber.build_orbit_lgt(auth, input_database, input_collection, output_database, output_collection, response_column, date_column, seasonality, seed, find, info=False)#

Module contents#