I haven't tried this, but the best guidance I can give is the following:
1. Create an appropriate decoder using Avro's DecoderFactory.
2. Construct an Arrow adapter with a schema and the decoder. There are some examples in the unit tests, and there is a rough sketch of steps 1 and 2 right after this list.
3. Adapt the method Uwe describes in his blog post about JDBC to using the adapter. From there I think you can use the TensorFlow APIs (sorry, I've not used them, but my understanding is TF only has Python APIs?).
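To make steps 1 and 2 concrete, here is a minimal sketch. I'm going from memory of the adapter module, so the package path (org.apache.arrow.adapter.avro) and class names (AvroToArrow, AvroToArrowConfigBuilder, AvroToArrowVectorIterator) may differ between Arrow versions, and the file names are placeholders:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.arrow.adapter.avro.AvroToArrow;
import org.apache.arrow.adapter.avro.AvroToArrowConfig;
import org.apache.arrow.adapter.avro.AvroToArrowConfigBuilder;
import org.apache.arrow.adapter.avro.AvroToArrowVectorIterator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.avro.Schema;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

public class AvroToArrowSketch {
  public static void main(String[] args) throws Exception {
    // Step 1: an Avro decoder over the raw input. Note that binaryDecoder
    // expects raw binary-encoded records, not an Avro object container file.
    Schema avroSchema = new Schema.Parser().parse(new File("record.avsc"));
    try (RootAllocator allocator = new RootAllocator();
         InputStream in = new FileInputStream("records.bin")) {
      BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(in, null);

      // Step 2: hand the schema and decoder to the adapter; it yields one
      // VectorSchemaRoot per batch of converted records.
      AvroToArrowConfig config = new AvroToArrowConfigBuilder(allocator).build();
      try (AvroToArrowVectorIterator batches =
               AvroToArrow.avroToArrowIterator(avroSchema, decoder, config)) {
        while (batches.hasNext()) {
          try (VectorSchemaRoot root = batches.next()) {
            System.out.println("converted batch with " + root.getRowCount() + " rows");
          }
        }
      }
    }
  }
}
```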
If number 3 doesn't work for you due to environment constraints, you could write out an Arrow file using the file writer (the basic single-root pattern is sketched below) and try to see if the examples listed in  help.
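For context, the pattern the file writer expects looks roughly like this: it is bound to one VectorSchemaRoot at construction, and you refill that root and call writeBatch() once per batch. A minimal sketch with a one-column schema; "data.arrow" is a placeholder path:

```java
import java.io.FileOutputStream;
import java.util.Collections;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.IntVector;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.ipc.ArrowFileWriter;
import org.apache.arrow.vector.types.pojo.ArrowType;
import org.apache.arrow.vector.types.pojo.Field;
import org.apache.arrow.vector.types.pojo.Schema;

public class FileWriterSketch {
  public static void main(String[] args) throws Exception {
    Schema schema = new Schema(Collections.singletonList(
        Field.nullable("x", new ArrowType.Int(32, true))));
    try (RootAllocator allocator = new RootAllocator();
         VectorSchemaRoot root = VectorSchemaRoot.create(schema, allocator);
         FileOutputStream out = new FileOutputStream("data.arrow");
         ArrowFileWriter writer = new ArrowFileWriter(root, null, out.getChannel())) {
      writer.start();
      // Refill the same root for each batch, then writeBatch(); the writer
      // snapshots whatever the root currently holds.
      IntVector x = (IntVector) root.getVector("x");
      x.allocateNew(3);
      for (int i = 0; i < 3; i++) {
        x.setSafe(i, i);
      }
      root.setRowCount(3);
      writer.writeBatch();
      writer.end();
    }
  }
}
```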
One thing to note: I believe the Avro adapter library currently has an impedance mismatch with the ArrowFileWriter. The adapter returns a new VectorSchemaRoot per batch, while the writer libraries are designed around loading/unloading a single VectorSchemaRoot. I think the method with the least overhead for transferring the data is to create a VectorUnloader per VectorSchemaRoot, convert it to a record batch, and then load that into the writer's VectorSchemaRoot with a VectorLoader. This will unfortunately cause some amount of memory churn due to extra allocations.
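A sketch of that bridging, assuming `batches` is the adapter iterator from the first sketch and `arrowSchema` is the matching Arrow schema (both hypothetical names here):

```java
import java.io.FileOutputStream;
import org.apache.arrow.adapter.avro.AvroToArrowVectorIterator;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.vector.VectorLoader;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.VectorUnloader;
import org.apache.arrow.vector.ipc.ArrowFileWriter;
import org.apache.arrow.vector.ipc.message.ArrowRecordBatch;
import org.apache.arrow.vector.types.pojo.Schema;

public class AvroBatchesToFile {
  static void writeBatches(AvroToArrowVectorIterator batches, Schema arrowSchema,
                           BufferAllocator allocator) throws Exception {
    try (VectorSchemaRoot writerRoot = VectorSchemaRoot.create(arrowSchema, allocator);
         FileOutputStream out = new FileOutputStream("converted.arrow");
         ArrowFileWriter writer = new ArrowFileWriter(writerRoot, null, out.getChannel())) {
      writer.start();
      VectorLoader loader = new VectorLoader(writerRoot);
      while (batches.hasNext()) {
        try (VectorSchemaRoot batchRoot = batches.next()) {
          // Unload the adapter's per-batch root into a record batch ...
          VectorUnloader unloader = new VectorUnloader(batchRoot);
          try (ArrowRecordBatch recordBatch = unloader.getRecordBatch()) {
            // ... then load it into the writer's single root and write it out.
            loader.load(recordBatch);
            writer.writeBatch();
          }
        }
      }
      writer.end();
    }
  }
}
```

The unload/load hop per batch is where the extra allocations I mentioned come from.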
There is a short general overview of working with Arrow available at 
Hope this helps,