Scala - Large file download with Play framework
I have sample download code that works fine if the file is not zipped. Because I know the length and provide it, I think Play does not have to bring the whole file into memory while streaming, and it works. The below code works:
def downloadLocalBackup() = Action {
  val pathOfFile = "/opt/mydir/backups/big/backup"
  val file = new java.io.File(pathOfFile)
  val path: java.nio.file.Path = file.toPath
  val source: Source[ByteString, _] = FileIO.fromPath(path)
  logger.info("from local backup set length in header " + file.length())
  Ok.sendEntity(HttpEntity.Streamed(source, Some(file.length()), Some("application/zip")))
    .withHeaders("Content-Disposition" -> s"attachment; filename=backup")
}
I don't know how streaming in the above case takes care of the difference in speed between disk reads (which are faster than the network). It never runs out of memory, even for large files. But when I use the below code, which has a ZipOutputStream, I am not sure of the reason it runs out of memory. Somehow the same 3 GB file is not working when I try to use a zip stream.
def downloadLocalBackup2() = Action {
  val pathOfFile = "/opt/mydir/backups/big/backup"
  val file = new java.io.File(pathOfFile)
  val path: java.nio.file.Path = file.toPath
  val enumerator = Enumerator.outputStream { os =>
    val zipStream = new ZipOutputStream(os)
    zipStream.putNextEntry(new ZipEntry("backup2"))
    val is = new BufferedInputStream(new FileInputStream(pathOfFile))
    val buf = new Array[Byte](1024)
    var len = is.read(buf)
    var totalLength = 0L
    var logged = false
    while (len >= 0) {
      zipStream.write(buf, 0, len)
      len = is.read(buf)
      if (!logged) {
        logged = true
        logger.info("logging while loop 1 time")
      }
    }
    is.close()
    zipStream.close()
  }
  logger.info("log right before sendEntity")
  val kk = Ok.sendEntity(
    HttpEntity.Streamed(
      Source.fromPublisher(Streams.enumeratorToPublisher(enumerator))
        .map(x => { val kk = Writeable.wByteArray.transform(x); kk }),
      None,
      Some("application/zip"))
  ).withHeaders("Content-Disposition" -> s"attachment; filename=backupFile.zip")
  kk
}
In the first example, Akka Streams handles the details for you. It knows how to read the input stream without loading the complete file into memory. That is the advantage of using Akka Streams, as explained in the docs:
The way we consume services from the Internet today includes many instances of streaming data, both downloading from a service as well as uploading to it or peer-to-peer data transfers. Regarding data as a stream of elements instead of in its entirety is very useful because it matches the way computers send and receive them (for example via TCP), but it is often also a necessity because data sets frequently become too large to be handled as a whole. We spread computations or analyses over large clusters and call it “big data”, where the whole principle of processing them is by feeding those data sequentially—as a stream—through some CPUs.
...
The purpose [of Akka Streams] is to offer an intuitive and safe way to formulate stream processing setups such that we can then execute them efficiently and with bounded resource usage—no more OutOfMemoryErrors. In order to achieve this our streams need to be able to limit the buffering that they employ, they need to be able to slow down producers if the consumers cannot keep up. This feature is called back-pressure and is at the core of the Reactive Streams initiative of which Akka is a founding member.
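To see this in action, here is a minimal standalone sketch (not from the original answer; it assumes Akka 2.6+, where an implicit ActorSystem provides the materializer, and the throttle stage stands in for a slow network client):

import akka.actor.ActorSystem
import akka.stream.scaladsl.{FileIO, Sink}
import akka.util.ByteString
import java.nio.file.Paths
import scala.concurrent.duration._

object BackPressureDemo extends App {
  implicit val system: ActorSystem = ActorSystem("demo")
  import system.dispatcher

  // FileIO.fromPath emits chunks (8 KiB by default) only as fast as the
  // downstream demands them, so the slow consumer below never causes the
  // whole file to be buffered in memory.
  FileIO.fromPath(Paths.get("/opt/mydir/backups/big/backup"))
    .throttle(1, 100.millis) // simulate a consumer slower than the disk
    .runWith(Sink.fold(0L)((acc: Long, chunk: ByteString) => acc + chunk.length))
    .foreach { total =>
      println(s"streamed $total bytes with bounded memory")
      system.terminate()
    }
}

This is how the first example copes with the disk being faster than the network: more chunks are read only when the client has consumed the previous ones.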
In the second example, you are handling the input/output streams yourself, using the standard blocking API. I'm not 100% sure how writing to the ZipOutputStream works here, but it is possible that it is not flushing the writes and is accumulating everything before close.
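One likely explanation is that Enumerator.outputStream does not apply back-pressure: writes to the OutputStream are buffered until the client consumes them, so a disk that is faster than the network grows that buffer without bound. If you really need a ZIP archive rather than gzip, a back-pressured alternative is Akka's StreamConverters.asOutputStream, which materializes an OutputStream whose writes block when the downstream cannot keep up. A minimal sketch, with zipSource as an illustrative name:

import java.util.zip.{ZipEntry, ZipOutputStream}
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Source, StreamConverters}
import akka.util.ByteString
import scala.concurrent.Future

def zipSource(pathOfFile: String)(implicit system: ActorSystem): Source[ByteString, _] =
  StreamConverters.asOutputStream().mapMaterializedValue { os =>
    // Run the blocking producer on another thread; its writes to `os`
    // block whenever the HTTP client falls behind, which is what keeps
    // memory bounded.
    Future {
      val zip = new ZipOutputStream(os)
      try {
        zip.putNextEntry(new ZipEntry("backup2"))
        val in = new java.io.BufferedInputStream(new java.io.FileInputStream(pathOfFile))
        try {
          val buf = new Array[Byte](8192)
          var len = in.read(buf)
          while (len >= 0) {
            zip.write(buf, 0, len)
            len = in.read(buf)
          }
        } finally in.close()
      } finally zip.close() // closing the zip stream completes the source
    }(system.dispatcher)
  }

The resulting Source[ByteString, _] can be passed to HttpEntity.Streamed with None as the length, exactly like the gzip version below.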
The good thing is that you don't need to handle this manually, since Akka Streams provides a way to gzip a Source of ByteStrings:
import javax.inject.Inject

import akka.stream.scaladsl.{Compression, FileIO, Source}
import akka.util.ByteString
import play.api.http.HttpEntity
import play.api.mvc.{BaseController, ControllerComponents}

class FooController @Inject()(val controllerComponents: ControllerComponents) extends BaseController {

  def download = Action {
    val pathOfFile = "/opt/mydir/backups/big/backup"
    val file = new java.io.File(pathOfFile)
    val path: java.nio.file.Path = file.toPath
    val source: Source[ByteString, _] = FileIO.fromPath(path)
    // Compress on the fly; back-pressure keeps memory usage bounded.
    val gzipped = source.via(Compression.gzip)
    // The compressed length is unknown up front, so no Content-Length is set.
    Ok.sendEntity(HttpEntity.Streamed(gzipped, None, Some("application/gzip")))
      .withHeaders("Content-Disposition" -> s"attachment; filename=backup.gz")
  }
}
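Note that Compression.gzip produces a gzip stream (a .gz file), not a ZIP archive, and the compressed size differs from file.length(); that is why the entity length above is left as None and the suggested filename ends in .gz. If you specifically need a .zip with named entries, the asOutputStream sketch above is one back-pressured way to get it.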