hive-issues mailing list archives

From "anishek (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HIVE-16591) DR for function Binaries on HDFS
Date Wed, 10 May 2017 09:43:04 GMT

     [ https://issues.apache.org/jira/browse/HIVE-16591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-16591:
---------------------------
    Description: 
# We have to make sure that during an incremental dump we don't allow functions to be copied
if they have local filesystem "file://" resources. How far to go here depends on how much
system-side work we want to do: we are going to explicitly document a caveat that only
functions created with the "using" clause are replicated, and since the "using" clause
prohibits creating functions with local "file://" resources, additional checks during repl
dump might not be required (see the scheme check in the sketch after this list).

# We have to make sure that during the bootstrap / incremental dump we append the name node
host + port if functions are created without a fully qualified HDFS URI for their resource
location (see the qualification step in the sketch after this list). It is not yet clear how
this plays out for the S3 or WASB filesystems.

# We have to copy the binaries in a function's resource list on CREATE / DROP FUNCTION. The
change management file system has to keep a copy of the binary when DROP FUNCTION is called,
so that the binary definition of an existing function can be updated while still supporting
DR (see the copy sketch after this list). An example list of steps is given in the doc
(ReplicateFunctions.pdf) attached to the parent issue.
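
As a rough illustration of the first two points, here is a minimal, hypothetical Java sketch
(not the actual Hive replication code) of rejecting local "file://" resources and of
qualifying an unqualified resource location against the default filesystem (name node host +
port) using the standard Hadoop Path / FileSystem APIs; class and method names are made up
for illustration.

{code:java}
// Hypothetical sketch only; class and method names are illustrative, not Hive's.
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FunctionResourceUris {

  /**
   * Point 1: resources on the local filesystem ("file://") should not be
   * replicated, so a dump-side check could simply look at the URI scheme.
   */
  static boolean isReplicable(URI resourceUri) {
    String scheme = resourceUri.getScheme();
    return scheme == null || !scheme.equalsIgnoreCase("file");
  }

  /**
   * Point 2: a resource location created without the fully qualified URI
   * (no name node host + port) can be qualified against the default
   * filesystem of the source cluster before it is written to the dump.
   */
  static Path qualify(String resourceLocation, Configuration conf) throws IOException {
    Path path = new Path(resourceLocation);
    // With no scheme/authority this resolves via fs.defaultFS of the cluster.
    FileSystem fs = path.getFileSystem(conf);
    return path.makeQualified(fs.getUri(), fs.getWorkingDirectory());
  }
}
{code}

For S3 or WASB the same qualification would resolve against whatever fs.defaultFS points to,
which is part of why the behaviour for those filesystems is still an open question in point 2.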
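
For the third point, a similarly hypothetical sketch of retaining a function's binary in the
change management area when DROP FUNCTION is issued; the change-management root path below is
an assumption, the real location would come from Hive configuration.

{code:java}
// Hypothetical sketch only; the change-management root is an assumed value.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class ChangeManagementCopy {

  // Assumed location of the change-management area on HDFS.
  private static final Path CM_ROOT = new Path("/apps/hive/cmroot");

  /**
   * Before the metadata of a dropped function is removed, copy each binary in
   * its resource list into the change-management area, so an incremental dump
   * taken after the DROP can still ship the old definition.
   */
  static void retainForReplication(Path resource, Configuration conf) throws IOException {
    FileSystem srcFs = resource.getFileSystem(conf);
    FileSystem cmFs = CM_ROOT.getFileSystem(conf);
    Path target = new Path(CM_ROOT, resource.getName());
    // deleteSource = false: the DROP itself decides what happens to the original copy.
    FileUtil.copy(srcFs, resource, cmFs, target, false, true, conf);
  }
}
{code}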

  was:
* We have to make sure that during an incremental dump we don't allow functions to be copied
if they have local filesystem "file://" resources. How far to go here depends on how much
system-side work we want to do: we are going to explicitly document a caveat that only
functions created with the "using" clause are replicated, and since the "using" clause
prohibits creating functions with local "file://" resources, additional checks during repl
dump might not be required.

* We have to make sure that during the bootstrap / incremental dump we append the name node
host + port if functions are created without a fully qualified HDFS URI for their resource
location.



> DR for function Binaries on HDFS 
> ---------------------------------
>
>                 Key: HIVE-16591
>                 URL: https://issues.apache.org/jira/browse/HIVE-16591
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>    Affects Versions: 3.0.0
>            Reporter: anishek
>            Assignee: anishek
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
