usergrid-commits mailing list archives

From mru...@apache.org
Subject [23/50] [abbrv] usergrid git commit: Initial checkin for Python Utilities and SDK
Date Mon, 01 Aug 2016 16:53:58 GMT
http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/parse_importer/README.md
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/parse_importer/README.md b/utils/usergrid-util-python/usergrid_tools/parse_importer/README.md
new file mode 100644
index 0000000..3f75025
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/parse_importer/README.md
@@ -0,0 +1,90 @@
+# Data Importer for Parse.com Application Data Export
+
+## Overview
+
+This Python script uses the Usergrid Python SDK to iterate over a data export from Parse.com and import it into a Usergrid instance.
+
+## Usage
+
+```
+usage: parse_data_importer.py [-h] -o ORG -a APP --url URL -f FILE --tmp_dir
+                              TMP_DIR [--client_id CLIENT_ID]
+                              [--client_secret CLIENT_SECRET]
+
+Parse.com Data Importer for Usergrid
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -o ORG, --org ORG     Name of the org to import data into
+  -a APP, --app APP     Name of the app to import data into
+  --url URL             The URL of the Usergrid Instance to import data into
+  -f FILE, --file FILE  Full or relative path of the data file to import
+  --tmp_dir TMP_DIR     Directory where data file will be unzipped
+  --client_id CLIENT_ID
+                        The Client ID for using OAuth Tokens - necessary if
+                        app is secured
+  --client_secret CLIENT_SECRET
+                        The Client Secret for using OAuth Tokens - necessary
+                        if app is secured
+```
+
+## Features
+
+Support for:
+* Roles -> Users
+* Roles -> Roles
+* Custom entities
+* Joins implemented as Graph Edges with the name of 'joins' - in both directions
+* Pointers implemented as Graph Edges with the name of 'pointers' - in both directions on an object
+
+No Support for:
+* Products - In-App Purchases
+* Installations - Will map to 'Devices' at some point - important for Push Notifications perhaps
+* Binary Assets (Images) - Work in Progress to complete
+
+## Graph Edges in Usergrid
+
+Usergrid is a Graph Datastore and implements the concept of a Graph Edge in the form of a 'connection'.  Pointers, when found on an object, are implemented as follows:
+
+Source Entity --[Edge Name]--> Target Entity
+
+This is represented as a URL as follows: `/{source_collection}/{source_entity_id}/pointers/{optional:target_type}`. A GET on this URL would return a list of entities which have this graph edge.  If a `{target_type}` is specified, the results will be limited to entities of that type.
+
+Examples: 
+* `GET /pets/max/pointers` - get the list of entities of all entity types which have a 'pointers' edge to them from the 'pet' 'max'
+* `GET /pets/max/pointers/owners` - get the list of entities of owners which have a 'pointers' edge to them from the 'pet' 'max'
+* `GET /pets/max/pointers/owners/jeff` - get the owner 'jeff' which has a 'pointers' edge to them from the 'pet' 'max'
+
+## Pointers
+
+Parse.com has support for pointers from one object to another.  For example, for a Pointer from a Pet to an Owner, the object might look as follows:
+ 
+```
+{
+  "fields" : "...",
+  "objectId": "A7Hdad8HD3",
+  "owner": {
+      "__type": "Pointer",
+      "className": "Owner",
+      "objectId": "QC41NHJJlU"
+  }
+}
+```
+
+
+## Joins
+Parse.com has support for the concept of a Join as well.  At the moment, Joining Users and Roles is supported, and an attempt has been made to support arbitrary Joins based on the format of the `_Join:users:_Role.json` file found in my exported data.  The from/to types appear to be found in the filename.
+
+An example of the Join file is below:
+
+```
+{ "results": [
+	{
+        "owningId": "lxhMWzbeXa",
+        "relatedId": "MCU2Cv9nuk"
+    }
+] }
+```
+
+
+Joins are implemented as Graph Edges with the name of 'joins' - in both directions from the objects where the Join was found.

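The pointer-traversal URLs described in the README above follow a simple pattern; the following is a minimal sketch of building them in Python (the base URL, org, and app names are hypothetical, not part of the commit):

```python
def pointer_url(base, collection, entity_id, target_type=None):
    # Build a Usergrid 'pointers' edge-traversal URL, e.g.
    # /pets/max/pointers or /pets/max/pointers/owners
    url = '%s/%s/%s/pointers' % (base, collection, entity_id)
    if target_type:
        url += '/' + target_type
    return url

# Hypothetical Usergrid instance, org, and app
base = 'https://usergrid.example.com/my-org/my-app'
print(pointer_url(base, 'pets', 'max'))
print(pointer_url(base, 'pets', 'max', 'owners'))
```

A GET on the first URL would list all entities reachable over a 'pointers' edge from the pet 'max'; the second narrows the results to the owners collection.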
http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/parse_importer/__init__.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/parse_importer/__init__.py b/utils/usergrid-util-python/usergrid_tools/parse_importer/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/parse_importer/parse_importer.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/parse_importer/parse_importer.py b/utils/usergrid-util-python/usergrid_tools/parse_importer/parse_importer.py
new file mode 100644
index 0000000..3a2d864
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/parse_importer/parse_importer.py
@@ -0,0 +1,385 @@
+import json
+import logging
+from logging.handlers import RotatingFileHandler
+import os
+from os import listdir
+import zipfile
+from os.path import isfile
+import sys
+import argparse
+import traceback
+
+from usergrid import Usergrid
+from usergrid.UsergridClient import UsergridEntity
+
+__author__ = 'Jeff West @ ApigeeCorporation'
+
+logger = logging.getLogger('UsergridParseImporter')
+
+parse_id_to_uuid_map = {}
+global_connections = {}
+config = {}
+
+
+def init_logging(stdout_enabled=True):
+    root_logger = logging.getLogger()
+    log_file_name = './usergrid_parse_importer.log'
+    log_formatter = logging.Formatter(fmt='%(asctime)s | %(name)s | %(processName)s | %(levelname)s | %(message)s',
+                                      datefmt='%m/%d/%Y %I:%M:%S %p')
+
+    rotating_file = logging.handlers.RotatingFileHandler(filename=log_file_name,
+                                                         mode='a',
+                                                         maxBytes=2048576000,
+                                                         backupCount=10)
+    rotating_file.setFormatter(log_formatter)
+    rotating_file.setLevel(logging.INFO)
+
+    root_logger.addHandler(rotating_file)
+    root_logger.setLevel(logging.INFO)
+
+    logging.getLogger('urllib3.connectionpool').setLevel(logging.WARN)
+    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(logging.WARN)
+
+    if stdout_enabled:
+        stdout_logger = logging.StreamHandler(sys.stdout)
+        stdout_logger.setFormatter(log_formatter)
+        stdout_logger.setLevel(logging.INFO)
+        root_logger.addHandler(stdout_logger)
+
+
+def convert_parse_entity(collection, parse_entity):
+    parse_entity['type'] = collection
+
+    if 'name' not in parse_entity and collection.lower() != 'users':
+        parse_entity['name'] = parse_entity['objectId']
+
+    connections = {}
+
+    for name, value in parse_entity.iteritems():
+        if isinstance(value, dict):
+            if value.get('__type') == 'Pointer':
+                class_name = value.get('className') if value.get('className')[0] != '_' else value.get('className')[1:]
+                connections[value.get('objectId')] = class_name
+
+                logger.info('Connection found from [%s: %s] to entity [%s: %s]' % (
+                    collection, parse_entity['name'], class_name, value.get('objectId')))
+
+    return UsergridEntity(parse_entity), connections
+
+
+def build_usergrid_entity(collection, entity_uuid, data=None):
+    identifier = {'type': collection, 'uuid': entity_uuid}
+    data = {} if data is None else data
+    data.update(identifier)
+    return UsergridEntity(data)
+
+
+def load_users_and_roles(working_directory):
+    with open(os.path.join(working_directory, '_User.json'), 'r') as f:
+        users = json.load(f).get('results', [])
+        logger.info('Loaded [%s] Users' % len(users))
+
+    for i, parse_user in enumerate(users):
+        logger.info('Loading user [%s]: [%s / %s]' % (i, parse_user['username'], parse_user['objectId']))
+        usergrid_user, connections = convert_parse_entity('users', parse_user)
+        res = usergrid_user.save()
+
+        if res.ok:
+            logger.info('Saved user [%s]: [%s / %s]' % (i, parse_user['username'], parse_user['objectId']))
+
+            if 'uuid' in usergrid_user.entity_data:
+                parse_id_to_uuid_map[parse_user['objectId']] = usergrid_user.get('uuid')
+        else:
+            logger.error(
+                    'Error saving user [%s]: [%s / %s] - %s' % (i, parse_user['username'], parse_user['objectId'], res))
+
+    with open(os.path.join(working_directory, '_Role.json'), 'r') as f:
+        roles = json.load(f).get('results', [])
+        logger.info('Loaded [%s] Roles' % len(roles))
+
+    for i, parse_role in enumerate(roles):
+        logger.info('Loading role [%s]: [%s / %s]' % (i, parse_role['name'], parse_role['objectId']))
+        usergrid_role, connections = convert_parse_entity('roles', parse_role)
+        res = usergrid_role.save()
+
+        if res.ok:
+            logger.info('Saved role [%s]: [%s / %s]' % (i, parse_role['name'], parse_role['objectId']))
+
+            if 'uuid' in usergrid_role.entity_data:
+                parse_id_to_uuid_map[parse_role['objectId']] = usergrid_role.get('uuid')
+
+        else:
+            logger.error(
+                    'Error saving role [%s]: [%s / %s] - %s' % (i, parse_role['name'], parse_role['objectId'], res))
+
+    join_file = os.path.join(working_directory, '_Join:users:_Role.json')
+
+    if os.path.isfile(join_file) and os.path.getsize(join_file) > 0:
+        with open(join_file, 'r') as f:
+            users_to_roles = json.load(f).get('results', [])
+            logger.info('Loaded [%s] User->Roles' % len(users_to_roles))
+
+            for user_to_role in users_to_roles:
+                role_id = user_to_role['owningId']
+                role_uuid = parse_id_to_uuid_map.get(role_id)
+
+                target_role_id = user_to_role['relatedId']
+                target_role_uuid = parse_id_to_uuid_map.get(target_role_id)
+
+                if role_uuid is None or target_role_uuid is None:
+                    logger.error('Failed on assigning role [%s] to user [%s]' % (role_uuid, target_role_uuid))
+                    continue
+
+                target_role_entity = build_usergrid_entity('user', target_role_uuid)
+
+                res = Usergrid.assign_role(role_uuid, target_role_entity)
+
+                if res.ok:
+                    logger.info('Assigned role [%s] to user [%s]' % (role_uuid, target_role_uuid))
+                else:
+                    logger.error('Failed on assigning role [%s] to user [%s]' % (role_uuid, target_role_uuid))
+
+    else:
+        logger.info('No Users -> Roles to load')
+
+    join_file = os.path.join(working_directory, '_Join:roles:_Role.json')
+
+    if os.path.isfile(join_file) and os.path.getsize(join_file) > 0:
+        with open(join_file, 'r') as f:
+            users_to_roles = json.load(f).get('results', [])
+            logger.info('Loaded [%s] Roles->Roles' % len(users_to_roles))
+
+            for user_to_role in users_to_roles:
+                role_id = user_to_role['owningId']
+                role_uuid = parse_id_to_uuid_map.get(role_id)
+
+                target_role_id = user_to_role['relatedId']
+                target_role_uuid = parse_id_to_uuid_map.get(target_role_id)
+
+                if role_uuid is None or target_role_uuid is None:
+                    logger.error('Failed on assigning role [%s] to role [%s]' % (role_uuid, target_role_uuid))
+                    continue
+
+                target_role_entity = build_usergrid_entity('role', target_role_uuid)
+
+                res = Usergrid.assign_role(role_uuid, target_role_entity)
+
+                if res.ok:
+                    logger.info('Assigned role [%s] to role [%s]' % (role_uuid, target_role_uuid))
+                else:
+                    logger.error('Failed on assigning role [%s] to role [%s]' % (role_uuid, target_role_uuid))
+
+    else:
+        logger.info('No Roles -> Roles to load')
+
+
+def process_join_file(working_directory, join_file):
+    file_path = os.path.join(working_directory, join_file)
+
+    logger.warn('Processing Join file: %s' % file_path)
+
+    parts = join_file.split(':')
+
+    if len(parts) != 3:
+        logger.warn('Did not find expected 3 parts in JOIN filename: %s' % join_file)
+        return
+
+    related_type = parts[1]
+    owning_type = parts[2].split('.')[0]
+
+    owning_type = owning_type[1:] if owning_type[0] == '_' else owning_type
+
+    with open(file_path, 'r') as f:
+        try:
+            json_data = json.load(f)
+
+        except ValueError, e:
+            print traceback.format_exc(e)
+            logger.error('Unable to process file: %s' % file_path)
+            return
+
+        entities = json_data.get('results')
+
+        for join in entities:
+            owning_id = join.get('owningId')
+            related_id = join.get('relatedId')
+
+            owning_entity = build_usergrid_entity(owning_type, parse_id_to_uuid_map.get(owning_id))
+            related_entity = build_usergrid_entity(related_type, parse_id_to_uuid_map.get(related_id))
+
+            connect_entities(owning_entity, related_entity, 'joins')
+            connect_entities(related_entity, owning_entity, 'joins')
+
+
+def load_entities(working_directory):
+    files = [
+        f for f in listdir(working_directory)
+
+        if isfile(os.path.join(working_directory, f))
+        and os.path.getsize(os.path.join(working_directory, f)) > 0
+        and f not in ['_Join:roles:_Role.json',
+                      '_Join:users:_Role.json',
+                      '_User.json',
+                      '_Product.json',
+                      '_Installation.json',
+                      '_Role.json']
+        ]
+
+    # sort to put join files last...
+    for data_file in sorted(files):
+        if data_file[0:6] == '_Join:':
+            process_join_file(working_directory, data_file)
+            continue
+
+        file_path = os.path.join(working_directory, data_file)
+        collection = data_file.split('.')[0]
+
+        if collection[0] == '_':
+            logger.warn('Found internal type: [%s]' % collection)
+            collection = collection[1:]
+
+        if collection not in global_connections:
+            global_connections[collection] = {}
+
+        with open(file_path, 'r') as f:
+
+            try:
+                json_data = json.load(f)
+
+            except ValueError, e:
+                print traceback.format_exc(e)
+                logger.error('Unable to process file: %s' % file_path)
+                continue
+
+            entities = json_data.get('results')
+
+            logger.info('Found [%s] entities of type [%s]' % (len(entities), collection))
+
+            for parse_entity in entities:
+                usergrid_entity, connections = convert_parse_entity(collection, parse_entity)
+                response = usergrid_entity.save()
+
+                global_connections[collection][usergrid_entity.get('uuid')] = connections
+
+                if response.ok:
+                    logger.info('Saved Entity: %s' % parse_entity)
+                else:
+                    logger.error('Error saving entity %s: %s' % (parse_entity, response))
+
+
+def connect_entities(from_entity, to_entity, connection_name):
+    connect_response = from_entity.connect(connection_name, to_entity)
+
+    if connect_response.ok:
+        logger.info('Successfully connected [%s / %s]--[%s]-->[%s / %s]' % (
+            from_entity.get('type'), from_entity.get('uuid'), connection_name, to_entity.get('type'),
+            to_entity.get('uuid')))
+    else:
+        logger.error('Unable to connect [%s / %s]--[%s]-->[%s / %s]: %s' % (
+            from_entity.get('type'), from_entity.get('uuid'), connection_name, to_entity.get('type'),
+            to_entity.get('uuid'), connect_response))
+
+
+def create_connections():
+    for from_collection, entity_map in global_connections.iteritems():
+
+        for from_entity_uuid, entity_connections in entity_map.iteritems():
+            from_entity = build_usergrid_entity(from_collection, from_entity_uuid)
+
+            for to_entity_id, to_entity_collection in entity_connections.iteritems():
+                to_entity = build_usergrid_entity(to_entity_collection, parse_id_to_uuid_map.get(to_entity_id))
+
+                connect_entities(from_entity, to_entity, 'pointers')
+                connect_entities(to_entity, from_entity, 'pointers')
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Parse.com Data Importer for Usergrid')
+
+    parser.add_argument('-o', '--org',
+                        help='Name of the Usergrid Org to import data into - must already exist',
+                        type=str,
+                        required=True)
+
+    parser.add_argument('-a', '--app',
+                        help='Name of the Usergrid Application to import data into - must already exist',
+                        type=str,
+                        required=True)
+
+    parser.add_argument('--url',
+                        help='The URL of the Usergrid Instance',
+                        type=str,
+                        required=True)
+
+    parser.add_argument('-f', '--file',
+                        help='Full or relative path of the data file to import',
+                        required=True,
+                        type=str)
+
+    parser.add_argument('--tmp_dir',
+                        help='Directory where data file will be unzipped',
+                        required=True,
+                        type=str)
+
+    parser.add_argument('--client_id',
+                        help='The Client ID for using OAuth Tokens - necessary if app is secured',
+                        required=False,
+                        type=str)
+
+    parser.add_argument('--client_secret',
+                        help='The Client Secret for using OAuth Tokens - necessary if app is secured',
+                        required=False,
+                        type=str)
+
+    my_args = parser.parse_args(sys.argv[1:])
+
+    return vars(my_args)
+
+
+def main():
+    global config
+    config = parse_args()
+
+    init_logging()
+
+    Usergrid.init(org_id=config.get('org'),
+                  app_id=config.get('app'),
+                  base_url=config.get('url'),
+                  client_id=config.get('client_id'),
+                  client_secret=config.get('client_secret'))
+
+    tmp_dir = config.get('tmp_dir')
+    file_path = config.get('file')
+
+    if not os.path.isfile(file_path):
+        logger.critical('File path specified [%s] is not a file!' % file_path)
+        logger.critical('Unable to continue')
+        exit(1)
+
+    if not os.path.isdir(tmp_dir):
+        logger.critical('Temp Directory path specified [%s] is not a directory!' % tmp_dir)
+        logger.critical('Unable to continue')
+        exit(1)
+
+    file_name = os.path.basename(file_path).split('.')[0]
+    working_directory = os.path.join(tmp_dir, file_name)
+
+    try:
+        with zipfile.ZipFile(file_path, 'r') as z:
+            logger.warn('Extracting files to directory: %s' % working_directory)
+            z.extractall(working_directory)
+            logger.warn('Extraction complete')
+
+    except Exception, e:
+        logger.critical(traceback.format_exc(e))
+        logger.critical('Extraction failed')
+        logger.critical('Unable to continue')
+        exit(1)
+
+    load_users_and_roles(working_directory)
+    load_entities(working_directory)
+    create_connections()
+
+
+if __name__ == '__main__':
+    main()

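The `process_join_file` function in the script above derives the from/to entity types from the Join filename itself (e.g. `_Join:users:_Role.json`). A Python 3 rendition of just that filename-parsing step, for illustration (the function name is mine, not from the commit):

```python
def parse_join_filename(join_file):
    # Split a Parse.com join filename like '_Join:users:_Role.json'
    # into (related_type, owning_type), stripping the leading
    # underscore that marks internal Parse types.
    parts = join_file.split(':')
    if len(parts) != 3:
        raise ValueError('expected 3 colon-separated parts: %s' % join_file)
    related_type = parts[1]
    owning_type = parts[2].split('.')[0]
    if owning_type.startswith('_'):
        owning_type = owning_type[1:]
    return related_type, owning_type

print(parse_join_filename('_Join:users:_Role.json'))  # ('users', 'Role')
```

Each `{"owningId": ..., "relatedId": ...}` record in the file is then mapped to Usergrid UUIDs and connected with a 'joins' edge in both directions.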
http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/permissions/README.md
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/permissions/README.md b/utils/usergrid-util-python/usergrid_tools/permissions/README.md
new file mode 100644
index 0000000..8398de6
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/permissions/README.md
@@ -0,0 +1,3 @@
+# Usergrid Tools (in Python)
+
+THIS WAS USED TO SET THE PERMISSIONS for /devices because it is different from /device in Jersey 2.0 and how we use Shiro
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/permissions/permissions.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/permissions/permissions.py b/utils/usergrid-util-python/usergrid_tools/permissions/permissions.py
new file mode 100644
index 0000000..e859843
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/permissions/permissions.py
@@ -0,0 +1,146 @@
+import json
+from multiprocessing import Pool
+
+import requests
+
+# URL Templates for Usergrid
+import time
+
+org_management_app_url_template = "{api_url}/management/organizations/{org}/applications?client_id={client_id}&client_secret={client_secret}"
+org_management_url_template = "{api_url}/management/organizations/{org}/applications?client_id={client_id}&client_secret={client_secret}"
+org_url_template = "{api_url}/{org}?client_id={client_id}&client_secret={client_secret}"
+app_url_template = "{api_url}/{org}/{app}?client_id={client_id}&client_secret={client_secret}"
+collection_url_template = "{api_url}/{org}/{app}/{collection}?client_id={client_id}&client_secret={client_secret}"
+collection_query_url_template = "{api_url}/{org}/{app}/{collection}?ql={ql}&client_id={client_id}&client_secret={client_secret}&limit={limit}"
+collection_graph_url_template = "{api_url}/{org}/{app}/{collection}?client_id={client_id}&client_secret={client_secret}&limit={limit}"
+connection_query_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}/{verb}?client_id={client_id}&client_secret={client_secret}"
+connecting_query_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}/connecting/{verb}?client_id={client_id}&client_secret={client_secret}"
+connection_create_by_uuid_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}/{verb}/{target_uuid}?client_id={client_id}&client_secret={client_secret}"
+connection_create_by_name_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}/{verb}/{target_type}/{target_name}?client_id={client_id}&client_secret={client_secret}"
+get_entity_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}?client_id={client_id}&client_secret={client_secret}&connections=none"
+get_entity_url_with_connections_template = "{api_url}/{org}/{app}/{collection}/{uuid}?client_id={client_id}&client_secret={client_secret}"
+put_entity_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}?client_id={client_id}&client_secret={client_secret}"
+permissions_url_template = "{api_url}/{org}/{app}/{collection}/{uuid}/permissions?client_id={client_id}&client_secret={client_secret}"
+
+user_credentials_url_template = "{api_url}/{org}/{app}/users/{uuid}/credentials"
+
+org = 'myOrg'
+
+config = {
+    "endpoint": {
+        "api_url": "https://host",
+    },
+    "credentials": {
+        "myOrg": {
+            "client_id": "foo-zw",
+            "client_secret": "bar"
+        }
+    }
+}
+
+api_url = config.get('endpoint').get('api_url')
+
+all_creds = config.get('credentials')
+
+creds = config.get('credentials').get(org)
+
+
+def post(**kwargs):
+    # print kwargs
+    # print "curl -X POST \"%s\" -d '%s'" % (kwargs.get('url'), kwargs.get('data'))
+    return requests.post(**kwargs)
+
+
+def build_role(name, title):
+    role = {
+        'name': name,
+        'roleName': name,
+        'inactivity': 0,
+        'title': title
+    }
+
+    return role
+
+
+def set_default_role(app):
+    print app
+    role_name = 'guest'
+    role = build_role('guest', 'Guest')
+
+    # # put default role
+    role_url = put_entity_url_template.format(org=org,
+                                              app=app,
+                                              uuid=role_name,
+                                              collection='roles',
+                                              api_url=api_url,
+                                              **creds)
+    print 'DELETE ' + role_url
+
+    # # r = requests.delete(role_url)
+    # #
+    # # if r.status_code != 200:
+    # #     print 'ERROR ON DELETE'
+    # #     print r.text
+    # #
+    # # time.sleep(3)
+    #
+    # # # put default role
+    # role_collection_url = collection_url_template.format(org=org,
+    #                                                      app=app,
+    #                                                      collection='roles',
+    #                                                      api_url=api_url,
+    #                                                      **creds)
+    # print 'POST ' + role_collection_url
+    #
+    # r = post(url=role_collection_url, data=json.dumps(role))
+    #
+    # if r.status_code != 200:
+    #     print r.text
+
+    permissions_url = permissions_url_template.format(org=org,
+                                                      limit=1000,
+                                                      app=app,
+                                                      collection='roles',
+                                                      uuid=role_name,
+                                                      api_url=api_url,
+                                                      **creds)
+
+    r = post(url=permissions_url, data=json.dumps({'permission': 'post:/users'}))
+
+    r = post(url=permissions_url, data=json.dumps({'permission': 'put:/devices/*'}))
+    r = post(url=permissions_url, data=json.dumps({'permission': 'put,post:/devices'}))
+
+    r = post(url=permissions_url, data=json.dumps({'permission': 'put:/device/*'}))
+    r = post(url=permissions_url, data=json.dumps({'permission': 'put,post:/device'}))
+
+    if r.status_code != 200:
+        print r.text
+
+
+def list_apps():
+    apps = []
+    source_org_mgmt_url = org_management_url_template.format(org=org,
+                                                             limit=1000,
+                                                             api_url=api_url,
+                                                             **creds)
+
+    r = requests.get(source_org_mgmt_url)
+
+    print r.text
+
+    data = r.json().get('data')
+
+    for app_uuid in data:
+
+        if 'self-care' in app_uuid:
+            parts = app_uuid.split('/')
+            apps.append(parts[1])
+
+    return apps
+
+
+apps = list_apps()
+
+pool = Pool(12)
+
+pool.map(set_default_role, apps)

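For reference, the permission grants that `set_default_role` in the script above POSTs to the 'guest' role can be collected in one place; a small sketch of the JSON bodies it sends (the grant strings are taken from the code; everything else here is illustrative):

```python
import json

# Permission grants applied to the 'guest' role, one POST each
# to /roles/guest/permissions in the script above.
grants = [
    'post:/users',
    'put:/devices/*',
    'put,post:/devices',
    'put:/device/*',
    'put,post:/device',
]

payloads = [json.dumps({'permission': g}) for g in grants]
print(payloads[0])
```

Note that both `/devices` and `/device` are granted, matching the README's remark that the two paths are treated differently by Jersey 2.0 and Shiro.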
http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/queue/README.md
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/queue/README.md b/utils/usergrid-util-python/usergrid_tools/queue/README.md
new file mode 100644
index 0000000..fee63f7
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/queue/README.md
@@ -0,0 +1 @@
+dlq_requeue - used for taking messages out of one queue and putting them in another.  Useful for reading Dead Letter messages and reprocessing them.  You could also add filtering to the logic if you wanted.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/queue/dlq-iterator-checker.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/queue/dlq-iterator-checker.py b/utils/usergrid-util-python/usergrid_tools/queue/dlq-iterator-checker.py
new file mode 100644
index 0000000..9f31e62
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/queue/dlq-iterator-checker.py
@@ -0,0 +1,143 @@
+from multiprocessing.pool import Pool
+import argparse
+import json
+import datetime
+import os
+import time
+import sys
+
+import boto
+from boto import sqs
+import requests
+
+__author__ = 'Jeff West @ ApigeeCorporation'
+
+sqs_conn = None
+sqs_queue = None
+
+# THIS WAS USED TO TAKE MESSAGES OUT OF THE DEAD LETTER AND TEST WHETHER THEY EXISTED OR NOT
+
+def total_seconds(td):
+    return (td.microseconds + (td.seconds + td.days * 24.0 * 3600) * 10.0 ** 6) / 10.0 ** 6
+
+
+def total_milliseconds(td):
+    return (td.microseconds + td.seconds * 1000000) / 1000
+
+
+def get_time_remaining(count, rate):
+    if rate == 0:
+        return 'NaN'
+
+    seconds = count * 1.0 / rate
+
+    m, s = divmod(seconds, 60)
+    h, m = divmod(m, 60)
+
+    return "%d:%02d:%02d" % (h, m, s)
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Usergrid Loader - Queue Monitor')
+
+    parser.add_argument('-c', '--config',
+                        help='The queue to load into',
+                        type=str,
+                        default='4g.json')
+
+    my_args = parser.parse_args(sys.argv[1:])
+
+    print str(my_args)
+
+    return vars(my_args)
+
+
+def check_exists(sqs_message):
+    # checks whether an entity is deleted.  if the entity is found then it prints an error message.
+    # this was used when there were many messages going to DLQ and the reason was because the entity had been deleted
+    try:
+        message = json.loads(sqs_message.get_body())
+    except ValueError:
+        print 'Unable to decode JSON: %s' % sqs_message.get_body()
+        return
+    try:
+        for event_name, event_data in message.iteritems():
+            entity_id_scope = event_data.get('entityIdScope')
+            app_id = entity_id_scope.get('applicationScope', {}).get('application', {}).get('uuid')
+            entity_id_scope = entity_id_scope.get('id')
+            entity_id = entity_id_scope.get('uuid')
+            entity_type = entity_id_scope.get('type')
+
+            url = 'http://localhost:8080/{app_id}/{entity_type}/{entity_id}'.format(
+                app_id=app_id,
+                entity_id=entity_id,
+                entity_type=entity_type
+            )
+
+            url = 'https://{host}/{basepath}/{app_id}/{entity_type}/{entity_id}'.format(
+                host='REPLACE',
+                basepath='REPLACE',
+                app_id=app_id,
+                entity_id=entity_id,
+                entity_type=entity_type
+            )
+
+            r = requests.get(url=url,
+                             headers={
+                                 'Authorization': 'Bearer XCA'
+                             })
+
+            if r.status_code != 404:
+                print 'ERROR/FOUND [%s]: %s' % (r.status_code, url)
+            else:
+                print '[%s]: %s' % (r.status_code, url)
+                deleted = sqs_conn.delete_message_from_handle(sqs_queue, sqs_message.receipt_handle)
+                if not deleted:
+                    print 'no delete!'
+
+    except KeyboardInterrupt, e:
+        raise e
+
+
+def main():
+    global sqs_conn, sqs_queue
+    args = parse_args()
+
+    start_time = datetime.datetime.utcnow()
+    first_start_time = start_time
+
+    print "first start: %s" % first_start_time
+
+    with open(args.get('config'), 'r') as f:
+        config = json.load(f)
+
+    sqs_config = config.get('sqs')
+
+    sqs_conn = boto.sqs.connect_to_region(**sqs_config)
+    queue_name = 'baas20sr_usea_baas20sr_usea_index_all_dead'
+    sqs_queue = sqs_conn.get_queue(queue_name)
+
+    last_size = sqs_queue.count()
+
+    print 'Last Size: ' + str(last_size)
+
+    pool = Pool(10)
+
+    keep_going = True
+
+    while keep_going:
+        sqs_messages = sqs_queue.get_messages(
+            num_messages=10,
+            visibility_timeout=10,
+            wait_time_seconds=10)
+
+        if len(sqs_messages) > 0:
+            pool.map(check_exists, sqs_messages)
+        else:
+            print 'DONE!'
+            pool.terminate()
+            keep_going = False
+
+
+if __name__ == '__main__':
+    main()
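For reference, a minimal sketch (not part of the commit) of the dead-letter message shape that `check_exists()` above navigates and the URL it builds. The field names mirror the `.get()` calls in the script; the UUIDs, host, and basepath are made-up placeholders.

```python
import json

# Hypothetical SQS message body with the nesting check_exists() expects
body = json.dumps({
    "entityIdScope": {
        "applicationScope": {
            "application": {"uuid": "f34f4ab0-0000-0000-0000-000000000000"}
        },
        "id": {
            "uuid": "d22a6f10-0000-0000-0000-000000000000",
            "type": "user"
        }
    }
})

# Same navigation as the script: application uuid, then entity id/type
event_data = json.loads(body)
entity_id_scope = event_data.get('entityIdScope')
app_id = entity_id_scope.get('applicationScope', {}).get('application', {}).get('uuid')
id_scope = entity_id_scope.get('id')
entity_id = id_scope.get('uuid')
entity_type = id_scope.get('type')

# The GET target the script probes (expecting a 404 for a deleted entity)
url = 'https://{host}/{basepath}/{app_id}/{entity_type}/{entity_id}'.format(
    host='example-host',
    basepath='example-basepath',
    app_id=app_id,
    entity_id=entity_id,
    entity_type=entity_type)
print(url)
```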

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/queue/dlq_requeue.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/queue/dlq_requeue.py b/utils/usergrid-util-python/usergrid_tools/queue/dlq_requeue.py
new file mode 100644
index 0000000..0b22770
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/queue/dlq_requeue.py
@@ -0,0 +1,173 @@
+import argparse
+import json
+import datetime
+import os
+import time
+import sys
+import uuid
+from Queue import Empty
+
+import boto
+from boto import sqs
+from multiprocessing import Process, Queue
+
+from boto.sqs.message import RawMessage
+
+__author__ = 'Jeff West @ ApigeeCorporation'
+
+
+def total_seconds(td):
+    return (td.microseconds + (td.seconds + td.days * 24.0 * 3600) * 10.0 ** 6) / 10.0 ** 6
+
+
+def total_milliseconds(td):
+    return (td.microseconds + td.seconds * 1000000) / 1000
+
+
+def get_time_remaining(count, rate):
+    if rate == 0:
+        return 'NaN'
+
+    seconds = count * 1.0 / rate
+
+    m, s = divmod(seconds, 60)
+    h, m = divmod(m, 60)
+
+    return "%d:%02d:%02d" % (h, m, s)
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Usergrid Loader - Queue Monitor')
+
+    parser.add_argument('--readers',
+                        help='The number of reader processes to start',
+                        type=int,
+                        default=10)
+
+    parser.add_argument('--writers',
+                        help='The number of writer processes to start',
+                        type=int,
+                        default=10)
+
+    parser.add_argument('-c', '--config',
+                        help='Path of the queue monitor configuration file',
+                        type=str,
+                        default='%s/.usergrid/queue_monitor.json' % os.getenv("HOME"))
+
+    parser.add_argument('--source_queue_name',
+                        help='The name of the queue to read messages from',
+                        default='entities',
+                        type=str)
+
+    parser.add_argument('--target_queue_name',
+                        help='The name of the queue to send messages to',
+                        default='entities',
+                        type=str)
+
+    my_args = parser.parse_args(sys.argv[1:])
+
+    print str(my_args)
+
+    return vars(my_args)
+
+
+class Writer(Process):
+    def __init__(self, queue_name, sqs_config, work_queue):
+        super(Writer, self).__init__()
+        self.queue_name = queue_name
+        self.sqs_config = sqs_config
+        self.work_queue = work_queue
+
+    def run(self):
+        sqs_conn = boto.sqs.connect_to_region(**self.sqs_config)
+
+        sqs_queue = sqs_conn.get_queue(self.queue_name)
+        sqs_queue.set_message_class(RawMessage)
+        counter = 0
+
+        # note that there is a better way but this way works; an update would be to use the batch interface
+
+        batch = []
+
+        while True:
+            try:
+                body = self.work_queue.get(timeout=10)
+                counter += 1
+
+                if counter % 100 == 1:
+                    print 'WRITER %s' % counter
+
+                batch.append((str(uuid.uuid1()), body, 0))
+
+                if len(batch) >= 10:
+                    print 'WRITING BATCH'
+                    sqs_queue.write_batch(batch, delay_seconds=300)
+                    batch = []
+
+            except Empty:
+
+                if len(batch) > 0:
+                    print 'WRITING BATCH'
+                    sqs_queue.write_batch(batch, delay_seconds=300)
+                    batch = []
+
+
+class Reader(Process):
+    def __init__(self, queue_name, sqs_config, work_queue):
+        super(Reader, self).__init__()
+        self.queue_name = queue_name
+        self.sqs_config = sqs_config
+        self.work_queue = work_queue
+
+    def run(self):
+
+        sqs_conn = boto.sqs.connect_to_region(**self.sqs_config)
+
+        sqs_queue = sqs_conn.get_queue(self.queue_name)
+        sqs_queue.set_message_class(RawMessage)
+
+        message_counter = 0
+
+        while True:
+
+            messages = sqs_queue.get_messages(num_messages=10)
+            print 'Read %s messages' % (len(messages))
+            for message in messages:
+                message_counter += 1
+
+                if message_counter % 100 == 1:
+                    print 'READ: %s' % message_counter
+
+                body = message.get_body()
+                self.work_queue.put(body)
+
+            if messages:
+                sqs_queue.delete_message_batch(messages)
+
+
+def main():
+    args = parse_args()
+
+    source_queue_name = args.get('source_queue_name')
+    target_queue_name = args.get('target_queue_name')
+
+    start_time = datetime.datetime.utcnow()
+    first_start_time = start_time
+
+    print "first start: %s" % first_start_time
+
+    with open(args.get('config'), 'r') as f:
+        config = json.load(f)
+
+    sqs_config = config.get('sqs')
+
+    work_queue = Queue()
+
+    readers = [Reader(source_queue_name, sqs_config, work_queue) for r in xrange(args.get('readers'))]
+    [r.start() for r in readers]
+
+    writers = [Writer(target_queue_name, sqs_config, work_queue) for r in xrange(args.get('writers'))]
+    [w.start() for w in writers]
+
+
+if __name__ == '__main__':
+    main()
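The Writer above accumulates `(message_id, body, delay_seconds)` tuples and flushes them through boto's `Queue.write_batch()` in groups of 10, the SQS batch limit. A standalone sketch of that batching discipline (boto-free so it runs anywhere; `add_to_batch` is a name invented here):

```python
import uuid

BATCH_SIZE = 10  # SQS caps batch operations at 10 messages


def add_to_batch(batch, body, delay_seconds=0):
    """Append a (message_id, body, delay_seconds) tuple - the shape
    boto's Queue.write_batch() expects - and return a full batch to
    flush once BATCH_SIZE entries have accumulated, else None."""
    batch.append((str(uuid.uuid1()), body, delay_seconds))
    if len(batch) >= BATCH_SIZE:
        to_send = list(batch)
        del batch[:]  # reset in place, as the Writer does with batch = []
        return to_send
    return None


batch = []
flushes = []

for n in range(25):
    full = add_to_batch(batch, 'message-%d' % n)
    if full is not None:
        flushes.append(full)

# 25 messages -> two full batches flushed, 5 left pending; the Writer's
# Empty-timeout branch is what drains such a partial batch.
print(len(flushes), len(batch))
```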

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/queue/queue-config-sample.json
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/queue/queue-config-sample.json b/utils/usergrid-util-python/usergrid_tools/queue/queue-config-sample.json
new file mode 100644
index 0000000..2b20d71
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/queue/queue-config-sample.json
@@ -0,0 +1,22 @@
+{
+  "ug_base_url": "<your usergrid api endpoint>",
+
+  "sqs": {
+    "region_name": "<aws region for using SQS>",
+    "aws_access_key_id": "<aws key for: creating queue, writing messages>",
+    "aws_secret_access_key": "<aws secret>"
+  },
+
+  "credential_map": {
+    "example1": {
+      "comments": "for each org you want to load/publish entities to there should be an entry in this map with the org name as the key and the client_id and secret to use when publishing entities",
+      "client_id": "<client_id>",
+      "client_secret": "<client_secret>"
+    },
+    "example2":{
+      "comments": "for each org you want to load/publish entities to there should be an entry in this map with the org name as the key and the client_id and secret to use when publishing entities",
+      "client_id": "<client_id>",
+      "client_secret": "<client_secret>"
+    }
+  }
+}
\ No newline at end of file
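The `credential_map` section above keys org names to client credentials. A small sketch (not part of the commit) of loading such a config and picking out the credentials for an org; the config literal and the helper name `credentials_for_org` are invented for illustration:

```python
import json

# In-memory stand-in for json.load() on the config file above
config = json.loads('''
{
  "ug_base_url": "https://example-api.invalid",
  "sqs": {
    "region_name": "us-east-1",
    "aws_access_key_id": "KEY",
    "aws_secret_access_key": "SECRET"
  },
  "credential_map": {
    "example1": {"client_id": "id-1", "client_secret": "secret-1"}
  }
}
''')


def credentials_for_org(config, org_name):
    """Return the client_id/client_secret entry for an org, using the
    org name as the key per the comments in the sample config."""
    return config.get('credential_map', {}).get(org_name)


creds = credentials_for_org(config, 'example1')
print(creds['client_id'])
```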

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/queue/queue_cleaner.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/queue/queue_cleaner.py b/utils/usergrid-util-python/usergrid_tools/queue/queue_cleaner.py
new file mode 100644
index 0000000..7f30d06
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/queue/queue_cleaner.py
@@ -0,0 +1,155 @@
+import argparse
+import json
+import datetime
+import os
+import time
+import sys
+
+import boto
+from boto import sqs
+from multiprocessing import Process, Queue
+
+__author__ = 'Jeff West @ ApigeeCorporation'
+
+
+def total_seconds(td):
+    return (td.microseconds + (td.seconds + td.days * 24.0 * 3600) * 10.0 ** 6) / 10.0 ** 6
+
+
+def total_milliseconds(td):
+    return (td.microseconds + td.seconds * 1000000) / 1000
+
+
+def get_time_remaining(count, rate):
+    if rate == 0:
+        return 'NaN'
+
+    seconds = count * 1.0 / rate
+
+    m, s = divmod(seconds, 60)
+    h, m = divmod(m, 60)
+
+    return "%d:%02d:%02d" % (h, m, s)
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Usergrid Loader - Queue Monitor')
+
+    parser.add_argument('-c', '--config',
+                        help='Path of the queue monitor configuration file',
+                        type=str,
+                        default='%s/.usergrid/queue_monitor.json' % os.getenv("HOME"))
+
+    parser.add_argument('-q', '--queue_name',
+                        help='The name of the queue to clean',
+                        default='entities',
+                        type=str)
+
+    my_args = parser.parse_args(sys.argv[1:])
+
+    print str(my_args)
+
+    return vars(my_args)
+
+
+class Deleter(Process):
+    def __init__(self, queue_name, sqs_config, work_queue):
+        super(Deleter, self).__init__()
+        self.queue_name = queue_name
+        self.sqs_config = sqs_config
+        self.work_queue = work_queue
+
+    def run(self):
+        sqs_conn = boto.sqs.connect_to_region(**self.sqs_config)
+
+        # queue = sqs_conn.get_queue(self.queue_name)
+
+        while True:
+            delete_me = self.work_queue.get()
+            delete_me.delete()
+
+
+class Worker(Process):
+    def __init__(self, queue_name, sqs_config, delete_queue):
+        super(Worker, self).__init__()
+        self.queue_name = queue_name
+        self.sqs_config = sqs_config
+        self.delete_queue = delete_queue
+
+    def run(self):
+
+        sqs_conn = boto.sqs.connect_to_region(**self.sqs_config)
+
+        queue = sqs_conn.get_queue(self.queue_name)
+
+        last_size = queue.count()
+
+        print 'Starting Size: %s' % last_size
+
+        delete_counter = 0
+        message_counter = 0
+
+        while True:
+
+            messages = queue.get_messages(num_messages=10, visibility_timeout=300)
+
+            for message in messages:
+                message_counter += 1
+                body = message.get_body()
+
+                try:
+
+                    msg = json.loads(body)
+
+                    if 'entityDeleteEvent' in msg:
+                        if msg['entityDeleteEvent']['entityIdScope']['id']['type'] == 'stock':
+
+                            self.delete_queue.put(message)
+                            delete_counter += 1
+
+                            if delete_counter % 100 == 0:
+                                print 'Deleted %s of %s' % (delete_counter, message_counter)
+                    else:
+                        # set it eligible to be read again
+                        message.change_visibility(0)
+
+                        print json.dumps(msg)
+
+                except (ValueError, KeyError, TypeError):
+                    # not JSON, or not the expected event shape - skip it
+                    pass
+
+
+
+def main():
+    args = parse_args()
+
+    queue_name = args.get('queue_name')
+
+    print 'queue_name=%s' % queue_name
+
+    start_time = datetime.datetime.utcnow()
+    first_start_time = start_time
+
+    print "first start: %s" % first_start_time
+
+    with open(args.get('config'), 'r') as f:
+        config = json.load(f)
+
+    sqs_config = config.get('sqs')
+    last_time = datetime.datetime.utcnow()
+
+    work_queue = Queue()
+
+    deleters = [Deleter(queue_name, sqs_config, work_queue) for x in xrange(100)]
+    [w.start() for w in deleters]
+
+    workers = [Worker(queue_name, sqs_config, work_queue) for x in xrange(100)]
+
+    [w.start() for w in workers]
+
+    time.sleep(60)
+
+if __name__ == '__main__':
+    main()
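The Worker above keeps a message only when it is an `entityDeleteEvent` whose entity id type is `stock`, and releases anything else back to the queue. That predicate can be pulled out as a pure function (the name `is_stock_delete_event` is invented here) and exercised without SQS:

```python
import json


def is_stock_delete_event(body):
    """Mirror of the Worker's filter: True when the message body is an
    entityDeleteEvent for an entity of type 'stock'."""
    try:
        msg = json.loads(body)
        return msg['entityDeleteEvent']['entityIdScope']['id']['type'] == 'stock'
    except (ValueError, KeyError, TypeError):
        return False


# A matching delete event and a non-matching event, shaped like the
# nesting the Worker reads (uuid value is made up)
delete_body = json.dumps(
    {"entityDeleteEvent": {"entityIdScope": {"id": {"type": "stock", "uuid": "0"}}}})
other_body = json.dumps({"entityIndexEvent": {}})

print(is_stock_delete_event(delete_body), is_stock_delete_event(other_body))
```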

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/redis/redis_iterator.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/redis/redis_iterator.py b/utils/usergrid-util-python/usergrid_tools/redis/redis_iterator.py
new file mode 100644
index 0000000..7edf1fc
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/redis/redis_iterator.py
@@ -0,0 +1,30 @@
+import json
+from collections import defaultdict
+
+import redis
+import time
+
+cache = redis.StrictRedis(host='localhost', port=6379, db=0)
+# cache.flushall()
+
+ecid_counter = defaultdict(int)
+counter = 0
+
+for key in cache.scan_iter(match='*visited'):
+
+    # print key
+    parts = key.split(':')
+    ecid = parts[0]
+
+    if ecid != 'd22a6f10-d3ef-47e3-bbe3-e1ccade5a241':
+        cache.delete(key)
+        ecid_counter[ecid] += 1
+        counter += 1
+
+        if counter % 100000 == 0 and counter != 0:
+            print json.dumps(ecid_counter, indent=2)
+            print 'Sleeping...'
+            time.sleep(60)
+            print 'AWAKE'
+
+print json.dumps(ecid_counter, indent=2)

http://git-wip-us.apache.org/repos/asf/usergrid/blob/32f9e55d/utils/usergrid-util-python/usergrid_tools/redis/redisscan.py
----------------------------------------------------------------------
diff --git a/utils/usergrid-util-python/usergrid_tools/redis/redisscan.py b/utils/usergrid-util-python/usergrid_tools/redis/redisscan.py
new file mode 100644
index 0000000..8957861
--- /dev/null
+++ b/utils/usergrid-util-python/usergrid_tools/redis/redisscan.py
@@ -0,0 +1,15 @@
+import redis
+
+r = redis.Redis("localhost", 6379)
+for key in r.scan_iter():
+    # print '%s: %s' % (r.ttl(key), key)
+
+    if key[0:4] == 'http':
+        r.set(key, 1)
+        # print 'set value'
+
+    if r.ttl(key) > 3600 \
+            or key[0:3] in ['v3:', 'v2:', 'v1:'] \
+            or ':visited' in key:
+        r.delete(key)
+        print 'delete %s' % key
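The deletion rule in redisscan.py above can be isolated as a pure predicate for testing without a live Redis. This sketch assumes the original meant three-character version prefixes (`'v2:'`, `'v1:'`), since `key[0:3]` can never equal a two-character string; the function name `should_delete` is invented here:

```python
def should_delete(key, ttl):
    """True when a key should be purged: long TTL, a v1/v2/v3 version
    prefix, or a visited-marker, matching the condition in redisscan.py."""
    return ttl > 3600 or key[0:3] in ['v3:', 'v2:', 'v1:'] or ':visited' in key


print(should_delete('v2:something', 10))    # version-prefixed key
print(should_delete('http://example', 10))  # short-lived http key is kept
```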

