ScaDaMaLe Course site and book

Write files with animal names continuously for structured streaming

This notebook can be used to write files every 2 seconds into the distributed file system, where each file contains a single row consisting of a time stamp and two animals chosen at random from the six animals listed in an animals.txt file on the driver.

After running the commands in this notebook you should have a set of files named by the minute and second, which makes it easy to set up structured streaming jobs in another notebook. This is mainly to create a stream of files for learning purposes. In a real situation, such streams would come from more robust ingestion frameworks such as Kafka queues.

It is a good idea to understand how to run executables from the driver to set up a stream of files for ingestion in structured streaming tasks downstream.

The following seven steps (Steps 0-6) can be reused in more complex situations, such as running a more elaborate simulator from an executable file.

Step 0

Let's get our bearings and prepare for setting up structured streaming from files.

Just find the working directory using %sh.

pwd
/databricks/driver

We are in the /databricks/driver directory.

To run the script, and to be able to kill it later with killall (see Step 6), you need the psmisc package installed.

apt-get install -y psmisc
Reading package lists...
Building dependency tree...
Reading state information...
psmisc is already the newest version (23.1-1ubuntu0.1).
psmisc set to manually installed.
The following packages were automatically installed and are no longer required:
  libcap2-bin libpam-cap zulu-repo
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Step 1

Let's first make the animals.txt file in the driver.

rm -f animals.txt &&
echo "cat" >> animals.txt &&
echo "dog" >> animals.txt &&
echo "owl" >> animals.txt &&
echo "pig" >> animals.txt &&
echo "bat" >> animals.txt &&
echo "rat" >> animals.txt &&
cat animals.txt
cat
dog
owl
pig
bat
rat

Step 2

Now let's make a bash shell script that, once started, produces the desired .log files every two seconds, with names given by the minute and second, inside the local directory logsEvery2Secs. The file every2SecsRndWordsInFiles.sh is explained line by line:

  • #!/bin/bash is how we tell that this is a bash script that needs the /bin/bash binary. I remember the magic two characters #! as "SHA-BANG": "hash" for # and "bang" for !
  • rm -f every2SecsRndWordsInFiles.sh && forcefully removes the file every2SecsRndWordsInFiles.sh; the trailing && ensures the next command only runs if the preceding one succeeded
  • echo "blah" >> every2SecsRndWordsInFiles.sh writes the content of the string, i.e., blah, into the file every2SecsRndWordsInFiles.sh in append mode, thanks to >>

The rest of the commands simply create a fresh directory logsEvery2Secs and, every two seconds, write a time stamp followed by two animals chosen at random from animals.txt into a file in logsEvery2Secs whose .log name is given by the minute and second of the current time. This keeps the set of file names finite (at most 3600 unique .log file names).

rm -f every2SecsRndWordsInFiles.sh &&
echo "#!/bin/bash" >> every2SecsRndWordsInFiles.sh &&
echo "rm -rf logsEvery2Secs" >> every2SecsRndWordsInFiles.sh &&
echo "mkdir -p logsEvery2Secs" >> every2SecsRndWordsInFiles.sh &&
echo "while true; do echo \$( date --rfc-3339=second )\; | cat - <(shuf -n2 animals.txt) | sed '$!{:a;N;s/\n/ /;ta}' > logsEvery2Secs/\$( date '+%M_%S.log' ); sleep 2; done" >> every2SecsRndWordsInFiles.sh &&
cat every2SecsRndWordsInFiles.sh
#!/bin/bash
rm -rf logsEvery2Secs
mkdir -p logsEvery2Secs
while true; do echo $( date --rfc-3339=second )\; | cat - <(shuf -n2 animals.txt) | sed '{:a;N;s/\n/ /;ta}' > logsEvery2Secs/$( date '+%M_%S.log' ); sleep 2; done
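For clarity, the same row can also be produced from Scala on the driver; here is a hedged, single-iteration equivalent of the bash loop above (it assumes the animals.txt file from Step 1 exists in the working directory):

import java.nio.file.{Files, Paths}
import scala.io.Source
import scala.util.Random

// one iteration of the loop, done from Scala instead of bash
Files.createDirectories(Paths.get("logsEvery2Secs"))
val animals = Source.fromFile("animals.txt").getLines.toVector   // the six animal names from Step 1
val twoAnimals = Random.shuffle(animals).take(2).mkString(" ")   // two animals without replacement, like shuf -n2
val now = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ssXXX").format(new java.util.Date()) // roughly date --rfc-3339=second
val fileName = new java.text.SimpleDateFormat("mm_ss").format(new java.util.Date()) + ".log"    // MM_SS.log, as in the bash loop
Files.write(Paths.get("logsEvery2Secs/" + fileName), (now + "; " + twoAnimals + "\n").getBytes)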

Step 3

Time to run the script!

The next two cells in %sh do the following:

  • makes sure the BASH script every2SecsRndWordsInFiles.sh is executable
  • run the script in the background without hangup
chmod 744 every2SecsRndWordsInFiles.sh
nohup ./every2SecsRndWordsInFiles.sh & 

After executing the above cell, hit the cancel button above to get the notebook process back. The BASH script will still be running in the background, as you can verify by evaluating the cell below, which lists the time-stamped file names inside the logsEvery2Secs directory.

Step 4

Check that everything is running as expected.

pwd
ls -al logsEvery2Secs
/databricks/driver
total 196
drwxr-xr-x 2 root root 4096 Nov 20 13:21 .
drwxr-xr-x 1 root root 4096 Nov 20 13:20 ..
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_10.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_12.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_14.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_16.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_18.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_20.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_22.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_24.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_26.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_28.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_30.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_32.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_34.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_36.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_38.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_40.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_42.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_44.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_46.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_48.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_50.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_52.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_54.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_56.log
-rw-r--r-- 1 root root   35 Nov 20 13:20 20_58.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_00.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_02.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_04.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_06.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_08.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_10.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_12.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_14.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_16.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_18.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_20.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_22.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_24.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_26.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_28.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_30.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_32.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_34.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_36.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_38.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_40.log
-rw-r--r-- 1 root root   35 Nov 20 13:21 21_42.log
cat logsEvery2Secs/21_42.log
2020-11-20 13:21:42+00:00; owl dog
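Since the files live on the driver's local file system, the same checks can also be done from the Scala side via sys.process, if you prefer to avoid a %sh cell (an optional alternative):

import sys.process._

// list the five most recent file names and print the newest file's content
println(Seq("bash", "-c", "ls logsEvery2Secs | tail -n 5").!!)
println(Seq("bash", "-c", "cat logsEvery2Secs/$(ls logsEvery2Secs | tail -n 1)").!!)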

Step 5

Next, let us prepare the distributed file system for ingesting this data by a simple dbutils.fs.cp command in a for loop, with a 5-second delay between each copy from the local file system that the BASH script is writing to.

We use this method of running a BASH script and copying from the local file system to the distributed one in order to mimic arbitrary file contents by merely changing the bash script.

dbutils.fs.rm("/datasets/streamingFiles/",true) // this is to delete the directory before staring a job
res0: Boolean = true
var a = 0;
// for loop execution to move files from local fs to distributed fs
for( a <- 1 to 60*60/5){ 
  // you may need to replace 60*60/5 above by a smaller number like 10 or 20 in the CE depending on how many files of your quota you have used up already
  dbutils.fs.cp("file:///databricks/driver/logsEvery2Secs/","/datasets/streamingFiles/",true)
  Thread.sleep(5000L) // sleep 5 seconds
}
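While the loop above is running (or after it has copied a few batches), you can confirm that files are landing in the distributed file system; a minimal check, assuming the same paths as above:

// list what has been copied into the distributed file system so far
display(dbutils.fs.ls("/datasets/streamingFiles/"))

// read the tiny log files back as text and peek at a few rows
// each line should look like: 2020-11-20 13:21:42+00:00; owl dog
spark.read.text("/datasets/streamingFiles/*.log").show(5, false)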

Step 6

When you are done with this streaming job, it is important that you cancel the above cell if it is still running and also terminate the BASH script every2SecsRndWordsInFiles.sh in the cell below, to prevent it from running forever!

In fact, you can execute the next cell before leaving this notebook so that the job gets killed once the above for loop finishes after an hour. You may need to remove the // in the next cell before killing the bash job.

killall every2SecsRndWordsInFiles.sh

ScaDaMaLe Course site and book

Write files periodically with normal mixture samples for structured streaming

This notebook can be used to write files every few seconds into the distributed file system, where each of these files contains a time stamp field followed by a randomly drawn sample.

After running the commands in this notebook you should have a set of files named by the minute and second for easy setting up of structured streaming jobs in another notebook.

Mixture of 2 Normals

Here we will write some Gaussian mixture samples to files.

import scala.util.Random
import scala.util.Random._

// sample from a mixture of two normal RVs with standard deviation 1 but with different location (mean) parameters
def myMixtureOf2Normals( normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: Random) : (String, Double) = {
  val sample = if (r.nextDouble <= normalWeight) {r.nextGaussian + normalLocation} 
               else {r.nextGaussian + abnormalLocation} 
  Thread.sleep(5L) // sleep 5 milliseconds
  val now = (new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")).format(new java.util.Date())
  return (now, sample)
}
import scala.util.Random
import scala.util.Random._
myMixtureOf2Normals: (normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: scala.util.Random)(String, Double)
val r = new Random(1L)
println(myMixtureOf2Normals(1.0, 10.0, 0.99, r), myMixtureOf2Normals(1.0, 10.0, 0.99, r))
// should always produce samples as (0.5876430182311466,-0.34037937678788865) when seed = 1L
((2020-11-16 10:47:55.774,0.5876430182311466),(2020-11-16 10:47:55.780,-0.34037937678788865))
r: scala.util.Random = scala.util.Random@cb2fa8b
display(sc.parallelize(Vector.fill(1000){myMixtureOf2Normals(1.0, 10.0, 0.99, r)}).toDF.select("_2")) // histogram of 1000 samples
_2
1.63847575097573
0.8497955378433464
1.0173381805959432
...
[display output truncated: 1,000 sampled values in the _2 column, most near the normal location 1.0, with a handful near the abnormal location 10.0 from the 1% component]
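As a rough sanity check on the 99%/1% mixture weights, you can count how many of a fresh batch of samples land far from the normal location; the threshold 5.0 below is just an arbitrary cut between the two components:

// draw 1000 fresh samples and estimate the fraction coming from the abnormal component
// (takes a few seconds because of the 5 ms sleep inside myMixtureOf2Normals)
val samples = Vector.fill(1000){ myMixtureOf2Normals(1.0, 10.0, 0.99, r)._2 }
val abnormalFraction = samples.count(_ > 5.0).toDouble / samples.size
println(abnormalFraction) // should be roughly 0.01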
dbutils.fs.rm("/datasets/streamingFilesNormalMixture/",true) // this is to delete the directory before staring a job
res2: Boolean = false
val r = new Random(12345L) // set seed for reproducibility
var a = 0;
// for loop execution to write files to distributed fs
for( a <- 1 to 5){
  // make a DataSet
  val data = sc.parallelize(Vector.fill(100){myMixtureOf2Normals(1.0, 10.0, 0.99, r)}) // 100 samples from mixture
               .coalesce(1) // this is to make sure that we have only one partition per dir
               .toDF.as[(String,Double)]
  val minute = (new java.text.SimpleDateFormat("mm")).format(new java.util.Date())
  val second = (new java.text.SimpleDateFormat("ss")).format(new java.util.Date())
  // write to dbfs
  data.write.mode(SaveMode.Overwrite).csv("/datasets/streamingFilesNormalMixture/" + minute +"_" + second)
  Thread.sleep(5000L) // sleep 5 seconds
}
r: scala.util.Random = scala.util.Random@5704bfda
a: Int = 0
display(dbutils.fs.ls("/datasets/streamingFilesNormalMixture/"))
path name size
dbfs:/datasets/streamingFilesNormalMixture/48_11/ 48_11/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_19/ 48_19/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_26/ 48_26/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_36/ 48_36/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_43/ 48_43/ 0.0
display(dbutils.fs.ls("/datasets/streamingFilesNormalMixture/48_43/"))
path name size
dbfs:/datasets/streamingFilesNormalMixture/48_43/_SUCCESS _SUCCESS 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_43/_committed_5911069874541273534 _committed_5911069874541273534 115.0
dbfs:/datasets/streamingFilesNormalMixture/48_43/_started_5911069874541273534 _started_5911069874541273534 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_43/part-00000-tid-5911069874541273534-d96c7c40-0395-40b6-a223-79c4cdb475c8-35610-1-c000.csv part-00000-tid-5911069874541273534-d96c7c40-0395-40b6-a223-79c4cdb475c8-35610-1-c000.csv 4310.0

Take a peek at what was written.

val df_csv = spark.read.option("inferSchema", "true").csv("/datasets/streamingFilesNormalMixture/48_43/*.csv")
df_csv: org.apache.spark.sql.DataFrame = [_c0: string, _c1: double]
df_csv.count() // 100 samples per file
res8: Long = 100
df_csv.show(10,false) // first 10
+-----------------------+--------------------+
|_c0                    |_c1                 |
+-----------------------+--------------------+
|2020-11-16 10:48:42.690|2.0531657985840983  |
|2020-11-16 10:48:42.696|1.7928797637680196  |
|2020-11-16 10:48:42.701|2.9329556976986013  |
|2020-11-16 10:48:42.706|1.1087520027663345  |
|2020-11-16 10:48:42.711|1.2115868818351045  |
|2020-11-16 10:48:42.716|1.9163661519192294  |
|2020-11-16 10:48:42.722|1.6917128257752045  |
|2020-11-16 10:48:42.727|1.0095879056962782  |
|2020-11-16 10:48:42.732|-0.13611276130309613|
|2020-11-16 10:48:42.737|2.2939319088848023  |
+-----------------------+--------------------+
only showing top 10 rows
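If these directories are later read as a file stream in a downstream notebook, the file source will need an explicit schema; a minimal sketch assuming the two-column CSV layout shown above (the column names and option values here are illustrative, not cells from this notebook):

import org.apache.spark.sql.types._

// the CSV rows are a timestamp string followed by a double-valued sample
val mixtureSchema = new StructType()
  .add("time", StringType)
  .add("score", DoubleType)

// a streaming DataFrame over the burst directories written above
val streamingMixture = spark.readStream
  .schema(mixtureSchema)           // file streams require a user-supplied schema
  .option("maxFilesPerTrigger", 1) // pick up one file per micro-batch
  .csv("/datasets/streamingFilesNormalMixture/*")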

ScaDaMaLe Course site and book

YouTry

Write a mixture of two random graph models for file streaming later

This could turn into a potential project whereby you develop a framework for simulating a stream of random graphs...

We will use it as a basic simulator for time series of network data. This can be extended for specific domains like network security, where extra fields can be added for protocols, ports, etc.

The raw ingredients are here... more or less.

Read the code from github

  • https://github.com/apache/spark/blob/master/graphx/src/main/scala/org/apache/spark/graphx/util/GraphGenerators.scala
  • Also check out: https://github.com/graphframes/graphframes/blob/master/src/main/scala/org/graphframes/examples/Graphs.scala

Let's focus on two of the simplest (deterministic) graphs.

import scala.util.Random

import org.apache.spark.graphx.{Graph, VertexId}
import org.apache.spark.graphx.util.GraphGenerators
import org.apache.spark.sql.functions.lit // import the lit function in sql
import org.graphframes._

/*
// A graph with edge attributes containing distances
val graph: Graph[Long, Double] = GraphGenerators.logNormalGraph(sc, numVertices = 50, seed=12345L).mapEdges { e => 
  // to make things nicer we assign 0 distance to itself
  if (e.srcId == e.dstId) 0.0 else Random.nextDouble()
}
*/

val graph: Graph[(Int,Int), Double] = GraphGenerators.gridGraph(sc, 5,5)
import scala.util.Random
import org.apache.spark.graphx.{Graph, VertexId}
import org.apache.spark.graphx.util.GraphGenerators
import org.apache.spark.sql.functions.lit
import org.graphframes._
graph: org.apache.spark.graphx.Graph[(Int, Int),Double] = org.apache.spark.graphx.impl.GraphImpl@2afd1b5c
val g = GraphFrame.fromGraphX(graph)
val gE= g.edges.select($"src", $"dst".as("dest"), lit(1L).as("count")) // for us the column count is just an edge incidence
g: org.graphframes.GraphFrame = GraphFrame(v:[id: bigint, attr: struct<_1: int, _2: int>], e:[src: bigint, dst: bigint ... 1 more field])
gE: org.apache.spark.sql.DataFrame = [src: bigint, dest: bigint ... 1 more field]
Warning: classes defined within packages cannot be redefined without a cluster restart.
Compilation successful.
d3.graphs.force(
  height = 500,
  width = 500,
  clicks = gE.as[d3.Edge])

val graphStar: Graph[Int, Int] = GraphGenerators.starGraph(sc, 10)
val gS = GraphFrame.fromGraphX(graphStar)
val gSE= gS.edges.select($"src", $"dst".as("dest"), lit(1L).as("count")) // for us the column count is just an edge incidence
d3.graphs.force(
  height = 500,
  width = 500,
  clicks = gSE.as[d3.Edge])

Now, write code to simulate from a mixture of graph models

  • See 037a_... and 037b_... notebooks for the file writing pattern.
  • First, try grid and star with a 98%-2% mixture, respectively (a sketch of one such burst follows the edge counts below)
  • Second, try a truly random graph, such as a log-normal degree-distributed random graph, mixed with a star
  • Try to make a simulation of random networks that is closer to your domain of application (you can always drop into Python and R for this part, even using non-distributed algorithms, as long as you can simulate large enough networks per burst).
val graphGrid: Graph[(Int,Int), Double] = GraphGenerators.gridGraph(sc, 50,50)
val gG = GraphFrame.fromGraphX(graphGrid)
gG.edges.count
graphGrid: org.apache.spark.graphx.Graph[(Int, Int),Double] = org.apache.spark.graphx.impl.GraphImpl@160a43de
gG: org.graphframes.GraphFrame = GraphFrame(v:[id: bigint, attr: struct<_1: int, _2: int>], e:[src: bigint, dst: bigint ... 1 more field])
res10: Long = 4900
val graphStar: Graph[Int, Int] = GraphGenerators.starGraph(sc, 101)
val gS = GraphFrame.fromGraphX(graphStar)
gS.edges.count
graphStar: org.apache.spark.graphx.Graph[Int,Int] = org.apache.spark.graphx.impl.GraphImpl@4395a75
gS: org.graphframes.GraphFrame = GraphFrame(v:[id: bigint, attr: int], e:[src: bigint, dst: bigint ... 1 more field])
res13: Long = 100
val gAllEdges = gS.edges.union(gG.edges)
gAllEdges.count
gAllEdges: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [src: bigint, dst: bigint ... 1 more field]
res16: Long = 5000
100.0/5000.0
res20: Double = 0.02
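A minimal sketch of how one such 98%-2% burst could be written out for file streaming, reusing gAllEdges from above together with the minute-and-second naming pattern of the earlier notebooks (the target directory name is illustrative):

// write the current grid + star edge mixture as one burst of CSV files
val minuteSecond = new java.text.SimpleDateFormat("mm_ss").format(new java.util.Date())
gAllEdges
  .coalesce(1) // one part-file per burst directory
  .write.mode("overwrite")
  .csv("/datasets/streamingFilesGraphMixture/" + minuteSecond)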

ScaDaMaLe Course site and book

Overview

Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. You can express your streaming computation the same way you would express a batch computation on static data. The Spark SQL engine will take care of running it incrementally and continuously and updating the final result as streaming data continues to arrive. You can use the Dataset/DataFrame API in Scala, Java, Python or R to express streaming aggregations, event-time windows, stream-to-batch joins, etc. The computation is executed on the same optimized Spark SQL engine. Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write Ahead Logs. In short, Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.

In this guide, we are going to walk you through the programming model and the APIs. First, let’s start with a simple example - a streaming word count.

Programming Model

The key idea in Structured Streaming is to treat a live data stream as a table that is being continuously appended to. This leads to a new stream processing model that is very similar to a batch processing model. You will express your streaming computation as a standard batch-like query, as if on a static table, and Spark runs it as an incremental query on the unbounded input table. Let’s understand this model in more detail.

Basic Concepts

Consider the input data stream as the “Input Table”. Every data item that is arriving on the stream is like a new row being appended to the Input Table.

Stream as a Table

A query on the input will generate the “Result Table”. Every trigger interval (say, every 1 second), new rows get appended to the Input Table, which eventually updates the Result Table. Whenever the result table gets updated, we would want to write the changed result rows to an external sink.

Model
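The trigger interval itself is chosen when the streaming query is started; a small, hypothetical fragment using the standard Trigger API (resultDF stands in for whatever result table you are computing):

import org.apache.spark.sql.streaming.Trigger

// hypothetical: fire a micro-batch every 1 second, matching the trigger interval mentioned above
val query = resultDF.writeStream
  .trigger(Trigger.ProcessingTime("1 second"))
  .outputMode("complete")
  .format("console")
  .start()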

The “Output” is defined as what gets written out to the external storage. The output can be defined in one of three modes:

  • Complete Mode - The entire updated Result Table will be written to the external storage. It is up to the storage connector to decide how to handle writing of the entire table.

  • Append Mode - Only the new rows appended in the Result Table since the last trigger will be written to the external storage. This is applicable only on the queries where existing rows in the Result Table are not expected to change.

  • Update Mode - Only the rows that were updated in the Result Table since the last trigger will be written to the external storage (available since Spark 2.1.1). Note that this is different from the Complete Mode in that this mode only outputs the rows that have changed since the last trigger. If the query doesn’t contain aggregations, it will be equivalent to Append mode.

Note that each mode is applicable only to certain types of queries. This is discussed in detail in the later section on output modes. To illustrate the use of this model, let’s understand it in the context of the Quick Example below.

The first streamingLines DataFrame is the input table, and the final wordCounts DataFrame is the result table. Note that the query on the streamingLines DataFrame to generate wordCounts is exactly the same as it would be with a static DataFrame. However, when this query is started, Spark will continuously check for new data from the directory. If there is new data, Spark will run an “incremental” query that combines the previous running counts with the new data to compute updated counts, as shown below.

Model

This model is significantly different from many other stream processing engines. Many streaming systems require the user to maintain running aggregations themselves, thus having to reason about fault-tolerance, and data consistency (at-least-once, or at-most-once, or exactly-once). In this model, Spark is responsible for updating the Result Table when there is new data, thus relieving the users from reasoning about it. As an example, let’s see how this model handles event-time based processing and late arriving data.

Quick Example

Let’s say you want to maintain a running word count of text data received from a file writer that is writing files into a directory datasets/streamingFiles in the distributed file system. Let’s see how you can express this using Structured Streaming.

Let’s walk through the example step-by-step and understand how it works.

First we need to start a file writing job in the companion notebook 037a_AnimalNamesStructStreamingFiles and then return here.

display(dbutils.fs.ls("/datasets/streamingFiles"))
path name size
dbfs:/datasets/streamingFiles/20_10.log 20_10.log 35.0
dbfs:/datasets/streamingFiles/20_12.log 20_12.log 35.0
dbfs:/datasets/streamingFiles/20_14.log 20_14.log 35.0
dbfs:/datasets/streamingFiles/20_16.log 20_16.log 35.0
...
[listing truncated: one 35-byte MM_SS.log file for each time stamp copied from the driver, continuing in the same pattern]
dbfs:/datasets/streamingFiles/28_14.log 28_14.log 35.0
dbfs:/datasets/streamingFiles/28_16.log 28_16.log 35.0
dbfs:/datasets/streamingFiles/28_18.log 28_18.log 35.0
dbfs:/datasets/streamingFiles/28_20.log 28_20.log 35.0
dbfs:/datasets/streamingFiles/28_22.log 28_22.log 35.0
dbfs:/datasets/streamingFiles/28_24.log 28_24.log 35.0
dbfs:/datasets/streamingFiles/28_26.log 28_26.log 35.0
dbfs:/datasets/streamingFiles/28_28.log 28_28.log 35.0
dbfs:/datasets/streamingFiles/28_30.log 28_30.log 35.0
dbfs:/datasets/streamingFiles/28_32.log 28_32.log 35.0
dbfs:/datasets/streamingFiles/28_34.log 28_34.log 35.0
dbfs:/datasets/streamingFiles/28_36.log 28_36.log 35.0
dbfs:/datasets/streamingFiles/28_38.log 28_38.log 35.0
dbfs:/datasets/streamingFiles/28_40.log 28_40.log 35.0
dbfs:/datasets/streamingFiles/28_42.log 28_42.log 35.0
dbfs:/datasets/streamingFiles/28_44.log 28_44.log 35.0
dbfs:/datasets/streamingFiles/28_46.log 28_46.log 35.0
dbfs:/datasets/streamingFiles/28_48.log 28_48.log 35.0
dbfs:/datasets/streamingFiles/28_50.log 28_50.log 35.0
dbfs:/datasets/streamingFiles/28_52.log 28_52.log 35.0
dbfs:/datasets/streamingFiles/28_54.log 28_54.log 35.0
dbfs:/datasets/streamingFiles/28_56.log 28_56.log 35.0
dbfs:/datasets/streamingFiles/28_58.log 28_58.log 35.0
dbfs:/datasets/streamingFiles/29_00.log 29_00.log 35.0
dbfs:/datasets/streamingFiles/29_02.log 29_02.log 35.0
dbfs:/datasets/streamingFiles/29_04.log 29_04.log 35.0
dbfs:/datasets/streamingFiles/29_06.log 29_06.log 35.0
dbfs:/datasets/streamingFiles/29_08.log 29_08.log 35.0
dbfs:/datasets/streamingFiles/29_10.log 29_10.log 35.0
dbfs:/datasets/streamingFiles/29_12.log 29_12.log 35.0
dbfs:/datasets/streamingFiles/29_14.log 29_14.log 35.0
dbfs:/datasets/streamingFiles/29_16.log 29_16.log 35.0
dbfs:/datasets/streamingFiles/29_28.log 29_28.log 35.0
dbutils.fs.head("/datasets/streamingFiles/20_16.log")
res1: String =
"2020-11-20 13:20:16+00:00; cat pig
"

Next, let’s create a streaming DataFrame that represents text data received from the directory, and transform the DataFrame to calculate word counts.

import org.apache.spark.sql.types._

// Create DataFrame representing the stream of input lines from files in distributed file store
//val textFileSchema = new StructType().add("line", "string") // for a custom schema

val streamingLines = spark
  .readStream
  //.schema(textFileSchema) // using default -> makes a column of String named value
  .option("MaxFilesPerTrigger", 1) //  maximum number of new files to be considered in every trigger (default: no max) 
  .format("text")
  .load("/datasets/streamingFiles")
import org.apache.spark.sql.types._
streamingLines: org.apache.spark.sql.DataFrame = [value: string]

This streamingLines DataFrame represents an unbounded table containing the streaming text data. This table contains one column of strings named “value”, and each line in the streaming text data becomes a row in the table. Note that this is not currently receiving any data, as we are just setting up the transformation and have not yet started it.

display(streamingLines)  // display will show you the contents of the DF
value
2020-11-20 13:26:26+00:00; bat owl
2020-11-20 13:28:23+00:00; owl cat
2020-11-20 13:29:31+00:00; dog rat
2020-11-20 13:30:19+00:00; cat dog
2020-11-20 13:28:01+00:00; bat cat
2020-11-20 13:30:11+00:00; cat rat
2020-11-20 13:22:07+00:00; cat dog
2020-11-20 13:22:48+00:00; bat rat
2020-11-20 13:27:22+00:00; owl bat
2020-11-20 13:29:15+00:00; dog cat
2020-11-20 13:29:47+00:00; dog cat
2020-11-20 13:30:09+00:00; cat owl
2020-11-20 13:21:45+00:00; bat rat
2020-11-20 13:22:30+00:00; rat pig
2020-11-20 13:22:42+00:00; bat cat
2020-11-20 13:24:50+00:00; dog bat
2020-11-20 13:25:14+00:00; bat rat
2020-11-20 13:21:25+00:00; dog owl
2020-11-20 13:22:17+00:00; rat owl
2020-11-20 13:27:57+00:00; cat pig
2020-11-20 13:28:19+00:00; pig bat
2020-11-20 13:29:13+00:00; cat pig
2020-11-20 13:30:07+00:00; pig dog
2020-11-20 13:26:04+00:00; dog bat
2020-11-20 13:26:58+00:00; rat bat
2020-11-20 13:29:49+00:00; dog pig
2020-11-20 13:24:22+00:00; cat owl
2020-11-20 13:29:35+00:00; dog owl
2020-11-20 13:30:01+00:00; owl rat
2020-11-20 13:22:32+00:00; dog rat
2020-11-20 13:24:12+00:00; bat dog
2020-11-20 13:25:26+00:00; owl dog
2020-11-20 13:28:13+00:00; owl dog
2020-11-20 13:23:12+00:00; pig owl
2020-11-20 13:24:00+00:00; owl dog
2020-11-20 13:25:22+00:00; bat pig
2020-11-20 13:22:34+00:00; bat dog
2020-11-20 13:25:36+00:00; owl bat
2020-11-20 13:23:04+00:00; owl bat
2020-11-20 13:22:54+00:00; cat dog
2020-11-20 13:23:30+00:00; owl dog
2020-11-20 13:24:58+00:00; pig bat
2020-11-20 13:22:01+00:00; dog rat
2020-11-20 13:23:16+00:00; pig cat
2020-11-20 13:20:53+00:00; pig cat
2020-11-20 13:21:21+00:00; pig owl
2020-11-20 13:24:30+00:00; owl dog
2020-11-20 13:29:07+00:00; rat pig
2020-11-20 13:21:39+00:00; rat cat
2020-11-20 13:27:59+00:00; bat rat
2020-11-20 13:26:42+00:00; bat pig
2020-11-20 13:29:09+00:00; pig cat
2020-11-20 13:25:08+00:00; rat bat
2020-11-20 13:22:58+00:00; cat pig
2020-11-20 13:29:23+00:00; rat pig
2020-11-20 13:21:27+00:00; cat dog
2020-11-20 13:22:50+00:00; rat cat
2020-11-20 13:25:42+00:00; bat owl
2020-11-20 13:28:21+00:00; cat rat
2020-11-20 13:22:52+00:00; dog rat
2020-11-20 13:22:15+00:00; pig owl
2020-11-20 13:22:46+00:00; pig rat
2020-11-20 13:27:32+00:00; pig cat
2020-11-20 13:21:13+00:00; owl cat
2020-11-20 13:20:49+00:00; owl dog
2020-11-20 13:21:37+00:00; pig bat
2020-11-20 13:24:56+00:00; owl bat
2020-11-20 13:25:10+00:00; bat dog
2020-11-20 13:21:07+00:00; dog pig
2020-11-20 13:26:36+00:00; dog bat
2020-11-20 13:27:24+00:00; pig rat
2020-11-20 13:26:24+00:00; pig cat
2020-11-20 13:21:55+00:00; owl pig
2020-11-20 13:20:55+00:00; rat pig
2020-11-20 13:22:26+00:00; bat dog
2020-11-20 13:22:25+00:00; owl dog
2020-11-20 13:24:38+00:00; bat rat
2020-11-20 13:21:19+00:00; pig cat
2020-11-20 13:23:52+00:00; bat pig
2020-11-20 13:21:03+00:00; cat rat
2020-11-20 13:20:57+00:00; bat dog
2020-11-20 13:22:09+00:00; bat rat
2020-11-20 13:21:05+00:00; owl bat
2020-11-20 13:21:23+00:00; dog pig
2020-11-20 13:21:11+00:00; cat owl
2020-11-20 13:31:29+00:00; owl dog
2020-11-20 13:31:31+00:00; dog owl
2020-11-20 13:31:41+00:00; dog owl
2020-11-20 13:28:35+00:00; rat pig
2020-11-20 13:28:09+00:00; rat dog
2020-11-20 13:28:55+00:00; pig rat
2020-11-20 13:29:51+00:00; owl pig
2020-11-20 13:27:10+00:00; pig cat
2020-11-20 13:27:36+00:00; pig owl
2020-11-20 13:31:03+00:00; owl pig
2020-11-20 13:25:46+00:00; rat bat
2020-11-20 13:28:17+00:00; rat cat
2020-11-20 13:28:53+00:00; bat pig
2020-11-20 13:21:47+00:00; owl rat
2020-11-20 13:27:16+00:00; cat bat
2020-11-20 13:21:49+00:00; cat bat
2020-11-20 13:23:44+00:00; cat bat
2020-11-20 13:22:05+00:00; owl cat
2020-11-20 13:26:28+00:00; dog pig
2020-11-20 13:24:46+00:00; cat owl
2020-11-20 13:30:27+00:00; cat dog
2020-11-20 13:27:53+00:00; owl pig
2020-11-20 13:23:10+00:00; cat pig
2020-11-20 13:24:16+00:00; pig rat
2020-11-20 13:23:22+00:00; pig rat
2020-11-20 13:23:34+00:00; pig rat
2020-11-20 13:24:42+00:00; pig owl
2020-11-20 13:22:28+00:00; pig dog
2020-11-20 13:20:47+00:00; bat owl
2020-11-20 13:25:18+00:00; pig bat
2020-11-20 13:25:28+00:00; pig rat
2020-11-20 13:23:56+00:00; owl pig
2020-11-20 13:25:30+00:00; bat cat
2020-11-20 13:32:23+00:00; rat pig
2020-11-20 13:32:37+00:00; rat owl
2020-11-20 13:32:29+00:00; cat dog
2020-11-20 13:32:41+00:00; cat rat
2020-11-20 13:33:02+00:00; rat dog
2020-11-20 13:33:00+00:00; dog owl
2020-11-20 13:31:59+00:00; pig cat
2020-11-20 13:21:09+00:00; cat owl
2020-11-20 13:24:24+00:00; pig cat
2020-11-20 13:23:46+00:00; bat owl
2020-11-20 13:31:43+00:00; dog rat
2020-11-20 13:21:53+00:00; owl dog
2020-11-20 13:32:27+00:00; pig bat
2020-11-20 13:22:38+00:00; cat pig
2020-11-20 13:20:59+00:00; rat dog
2020-11-20 13:22:19+00:00; bat cat
2020-11-20 13:31:15+00:00; bat owl
2020-11-20 13:22:23+00:00; owl pig
2020-11-20 13:32:25+00:00; rat bat
2020-11-20 13:22:44+00:00; cat rat
2020-11-20 13:32:57+00:00; cat dog
2020-11-20 13:25:40+00:00; pig rat
2020-11-20 13:30:45+00:00; pig bat
2020-11-20 13:28:39+00:00; owl cat
2020-11-20 13:30:47+00:00; pig owl
2020-11-20 13:29:17+00:00; bat dog
2020-11-20 13:26:12+00:00; cat owl
2020-11-20 13:21:15+00:00; rat pig
2020-11-20 13:22:11+00:00; pig rat
2020-11-20 13:22:40+00:00; pig owl
2020-11-20 13:21:01+00:00; owl pig
2020-11-20 13:25:50+00:00; dog pig
2020-11-20 13:23:06+00:00; dog rat
2020-11-20 13:26:38+00:00; pig owl
2020-11-20 13:23:14+00:00; cat pig
2020-11-20 13:22:36+00:00; owl cat
2020-11-20 13:22:56+00:00; dog rat
2020-11-20 13:27:38+00:00; dog rat
2020-11-20 13:21:57+00:00; cat rat
2020-11-20 13:26:30+00:00; owl bat
2020-11-20 13:26:06+00:00; rat bat
2020-11-20 13:21:17+00:00; bat cat
2020-11-20 13:21:29+00:00; pig cat
2020-11-20 13:26:40+00:00; dog owl
2020-11-20 13:27:42+00:00; cat owl
2020-11-20 13:29:25+00:00; pig owl
2020-11-20 13:21:35+00:00; pig owl
2020-11-20 13:23:32+00:00; bat owl
2020-11-20 13:23:40+00:00; rat bat
2020-11-20 13:26:16+00:00; rat pig
2020-11-20 13:28:29+00:00; owl pig
2020-11-20 13:27:51+00:00; cat rat
2020-11-20 13:30:13+00:00; owl pig
2020-11-20 13:25:04+00:00; bat cat
2020-11-20 13:24:08+00:00; rat bat
2020-11-20 13:31:27+00:00; dog bat
2020-11-20 13:26:56+00:00; pig bat
2020-11-20 13:33:40+00:00; owl cat
2020-11-20 13:33:48+00:00; owl bat
2020-11-20 13:34:14+00:00; dog bat
2020-11-20 13:33:30+00:00; bat pig
2020-11-20 13:33:38+00:00; owl pig
2020-11-20 13:33:24+00:00; cat owl
2020-11-20 13:34:08+00:00; bat owl
2020-11-20 13:33:10+00:00; dog pig
2020-11-20 13:33:32+00:00; bat cat
2020-11-20 13:33:44+00:00; bat rat
2020-11-20 13:30:03+00:00; bat rat
2020-11-20 13:30:31+00:00; cat owl
2020-11-20 13:32:33+00:00; cat pig
2020-11-20 13:24:40+00:00; pig cat
2020-11-20 13:26:22+00:00; owl bat
2020-11-20 13:28:05+00:00; cat bat
2020-11-20 13:31:11+00:00; owl cat
2020-11-20 13:23:58+00:00; owl bat
2020-11-20 13:25:06+00:00; cat pig
2020-11-20 13:31:23+00:00; rat cat
2020-11-20 13:23:48+00:00; dog rat
2020-11-20 13:26:52+00:00; owl bat
2020-11-20 13:23:36+00:00; owl rat
2020-11-20 13:21:51+00:00; rat pig
2020-11-20 13:21:43+00:00; bat owl
2020-11-20 13:32:49+00:00; owl bat
2020-11-20 13:23:20+00:00; rat pig
2020-11-20 13:30:53+00:00; bat owl
2020-11-20 13:26:46+00:00; bat cat
2020-11-20 13:21:31+00:00; cat bat
2020-11-20 13:28:43+00:00; rat pig
2020-11-20 13:30:29+00:00; owl cat
2020-11-20 13:33:22+00:00; dog pig
2020-11-20 13:31:21+00:00; bat owl
2020-11-20 13:32:39+00:00; pig dog
2020-11-20 13:21:41+00:00; pig dog
2020-11-20 13:26:10+00:00; bat rat
2020-11-20 13:27:00+00:00; pig owl
2020-11-20 13:22:03+00:00; cat dog
2020-11-20 13:22:21+00:00; dog pig
2020-11-20 13:30:05+00:00; bat dog
2020-11-20 13:25:02+00:00; cat bat
2020-11-20 13:24:02+00:00; pig bat
2020-11-20 13:27:18+00:00; cat pig
2020-11-20 13:23:08+00:00; cat bat
2020-11-20 13:28:03+00:00; pig cat
2020-11-20 13:31:05+00:00; dog pig
2020-11-20 13:26:18+00:00; owl rat
2020-11-20 13:23:42+00:00; cat dog
2020-11-20 13:23:24+00:00; bat owl
2020-11-20 13:24:14+00:00; dog rat
2020-11-20 13:25:24+00:00; dog bat
2020-11-20 13:27:40+00:00; dog rat
2020-11-20 13:23:50+00:00; cat pig
2020-11-20 13:27:48+00:00; rat cat
2020-11-20 13:30:25+00:00; dog bat
2020-11-20 13:30:21+00:00; dog bat
2020-11-20 13:30:35+00:00; cat owl
2020-11-20 13:25:38+00:00; owl rat
2020-11-20 13:31:01+00:00; owl pig
2020-11-20 13:21:33+00:00; rat bat
2020-11-20 13:28:59+00:00; owl cat
2020-11-20 13:33:16+00:00; cat owl
2020-11-20 13:35:36+00:00; dog bat
2020-11-20 13:34:36+00:00; rat pig
2020-11-20 13:34:34+00:00; pig dog
2020-11-20 13:35:26+00:00; owl rat
2020-11-20 13:34:58+00:00; pig bat
2020-11-20 13:35:24+00:00; cat bat
2020-11-20 13:35:22+00:00; dog pig
2020-11-20 13:35:44+00:00; dog pig
2020-11-20 13:34:26+00:00; cat rat
2020-11-20 13:35:18+00:00; bat dog
2020-11-20 13:29:43+00:00; owl cat
2020-11-20 13:31:39+00:00; bat rat
2020-11-20 13:25:48+00:00; bat pig
2020-11-20 13:27:08+00:00; cat rat
2020-11-20 13:28:47+00:00; pig bat
2020-11-20 13:29:03+00:00; bat rat
2020-11-20 13:31:55+00:00; bat cat
2020-11-20 13:32:43+00:00; cat dog
2020-11-20 13:35:34+00:00; cat bat
2020-11-20 13:24:52+00:00; cat pig
2020-11-20 13:28:15+00:00; cat rat
2020-11-20 13:30:37+00:00; bat cat
2020-11-20 13:24:10+00:00; owl pig
2020-11-20 13:25:34+00:00; pig rat
2020-11-20 13:31:51+00:00; dog owl
2020-11-20 13:29:39+00:00; rat dog
2020-11-20 13:24:32+00:00; bat owl

Next, we will convert the DataFrame to a Dataset of String using .as[String], so that we can apply the flatMap operation to split each line into multiple words. The resultant words Dataset contains all the words.

val words = streamingLines.as[String]
                          .map(line => line.split(";").drop(1)(0)) // this is to simply cut out the timestamp from this stream
                          .flatMap(_.split(" ")) // flat map by splitting the animal words separated by whitespace
                          .filter( _ != "") // remove empty words that may be artifacts of opening whitespace
words: org.apache.spark.sql.Dataset[String] = [value: string]

Finally, we define the wordCounts DataFrame by grouping by the unique values in the Dataset and counting them. Note that this is a streaming DataFrame which represents the running word counts of the stream.

// Generate running word count
val wordCounts = words
                  .groupBy("value").count() // this does the word count
                  .orderBy($"count".desc) // we are simply sorting by the most frequent words
wordCounts: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [value: string, count: bigint]

We have now set up the query on the streaming data. All that is left is to actually start receiving data and computing the counts. To do this, we set it up to print the complete set of counts (specified by outputMode("complete")) to the console every time they are updated. And then start the streaming computation using start().

// Start running the query that prints the running counts to the console
val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

query.awaitTermination() // hit cancel to terminate - killall the bash script in 037a_AnimalNamesStructStreamingFiles
-------------------------------------------
Batch: 0
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  cat|    1|
|  owl|    1|
+-----+-----+

-------------------------------------------
Batch: 1
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  cat|    2|
|  pig|    1|
|  owl|    1|
+-----+-----+

-------------------------------------------
Batch: 2
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  cat|    2|
|  pig|    2|
|  owl|    1|
|  dog|    1|
+-----+-----+

-------------------------------------------
Batch: 3
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  cat|    3|
|  dog|    2|
|  pig|    2|
|  owl|    1|
+-----+-----+

-------------------------------------------
Batch: 4
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  cat|    4|
|  pig|    3|
|  dog|    2|
|  owl|    1|
+-----+-----+

-------------------------------------------
Batch: 5
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  pig|    4|
|  cat|    4|
|  dog|    2|
|  owl|    1|
|  rat|    1|
+-----+-----+

-------------------------------------------
Batch: 6
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  pig|    5|
|  cat|    4|
|  dog|    2|
|  owl|    2|
|  rat|    1|
+-----+-----+

-------------------------------------------
Batch: 7
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  pig|    5|
|  cat|    4|
|  owl|    3|
|  dog|    2|
|  rat|    1|
|  bat|    1|
+-----+-----+

-------------------------------------------
Batch: 8
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  pig|    5|
|  cat|    4|
|  owl|    4|
|  dog|    2|
|  bat|    2|
|  rat|    1|
+-----+-----+

-------------------------------------------
Batch: 9
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  pig|    5|
|  owl|    5|
|  cat|    4|
|  bat|    3|
|  dog|    2|
|  rat|    1|
+-----+-----+

-------------------------------------------
Batch: 10
-------------------------------------------
+-----+-----+
|value|count|
+-----+-----+
|  owl|    5|
|  pig|    5|
|  bat|    4|
|  cat|    4|
|  dog|    2|
|  rat|    2|
+-----+-----+

After this code is executed, the streaming computation will have started in the background. The query object is a handle to that active streaming query, and we have decided to wait for the termination of the query using awaitTermination() to prevent the process from exiting while the query is active.
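
If you prefer not to block the notebook on awaitTermination(), the StreamingQuery handle returned by start() can also be inspected and stopped explicitly. A minimal sketch, reusing the query handle defined above:

// inspect and manage the streaming query handle instead of blocking on awaitTermination()
query.isActive       // true while the streaming query is running
query.lastProgress   // the most recent progress report (input rate, batch durations, ...)
query.stop()         // gracefully stop the streaming query
spark.streams.active // all streaming queries currently active on this SparkSession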

Handling Event-time and Late Data

Event-time is the time embedded in the data itself. For many applications, you may want to operate on this event-time. For example, if you want to get the number of events generated by IoT devices every minute, then you probably want to use the time when the data was generated (that is, event-time in the data), rather than the time Spark receives them. This event-time is very naturally expressed in this model – each event from the devices is a row in the table, and event-time is a column value in the row. This allows window-based aggregations (e.g. number of events every minute) to be just a special type of grouping and aggregation on the event-time column – each time window is a group and each row can belong to multiple windows/groups. Therefore, such event-time-window-based aggregation queries can be defined consistently on both a static dataset (e.g. from collected device events logs) as well as on a data stream, making the life of the user much easier.
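
For instance, such an event-time-window count is just a grouped aggregation on a window over the event-time column. A minimal sketch, assuming a hypothetical streaming DataFrame eventsDF with an eventTime timestamp column and a word string column (with a sliding window, each row can fall into more than one window):

import org.apache.spark.sql.functions.window

// count words per 10-minute event-time window, sliding every 5 minutes
val windowedCounts = eventsDF
  .groupBy(window($"eventTime", "10 minutes", "5 minutes"), $"word")
  .count()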

Furthermore, this model naturally handles data that has arrived later than expected based on its event-time. Since Spark is updating the Result Table, it has full control over updating old aggregates when there is late data, as well as cleaning up old aggregates to limit the size of intermediate state data. Since Spark 2.1, we have support for watermarking which allows the user to specify the threshold of late data, and allows the engine to accordingly clean up old state. These are explained later in more detail in the Window Operations section below.
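
A minimal watermarking sketch, again on the hypothetical eventsDF above, telling the engine it may discard state for data arriving more than 10 minutes late in event-time:

// the watermark bounds how late data may arrive before its window's state is cleaned up
val lateTolerantCounts = eventsDF
  .withWatermark("eventTime", "10 minutes")
  .groupBy(window($"eventTime", "10 minutes", "5 minutes"), $"word")
  .count()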

Fault Tolerance Semantics

Delivering end-to-end exactly-once semantics was one of the key goals behind the design of Structured Streaming. To achieve that, we have designed the Structured Streaming sources, the sinks and the execution engine to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing. Every streaming source is assumed to have offsets (similar to Kafka offsets, or Kinesis sequence numbers) to track the read position in the stream. The engine uses checkpointing and write-ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotent sinks, Structured Streaming can ensure end-to-end exactly-once semantics under any failure.
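
As a concrete sketch of these pieces fitting together, a replayable file source feeding an idempotent file sink with a user-specified checkpoint location is enough for end-to-end recovery; the output and checkpoint paths below are hypothetical and streamingLines is the streaming DataFrame defined earlier:

// offsets and state are recorded under checkpointLocation, so the query can be restarted after a failure
val archiveQuery = streamingLines.writeStream
  .format("parquet")
  .option("path", "/datasets/streamingFilesAsParquet")           // hypothetical output directory
  .option("checkpointLocation", "/datasets/streamingFilesChkpt") // hypothetical checkpoint directory
  .start()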

API using Datasets and DataFrames

Since Spark 2.0, DataFrames and Datasets can represent static, bounded data, as well as streaming, unbounded data. Similar to static Datasets/DataFrames, you can use the common entry point SparkSession (Scala/Java/Python/R docs) to create streaming DataFrames/Datasets from streaming sources, and apply the same operations on them as static DataFrames/Datasets. If you are not familiar with Datasets/DataFrames, you are strongly advised to familiarize yourself with them using the DataFrame/Dataset Programming Guide.

Creating streaming DataFrames and streaming Datasets

Streaming DataFrames can be created through the DataStreamReader interface (Scala/Java/Python docs) returned by SparkSession.readStream(); in R, use the read.stream() method. Similar to the read interface for creating a static DataFrame, you can specify the details of the source – data format, schema, options, etc.

Input Sources

In Spark 2.0, there are a few built-in sources.

  • File source - Reads files written in a directory as a stream of data. Supported file formats are text, csv, json, parquet. See the docs of the DataStreamReader interface for a more up-to-date list, and supported options for each file format. Note that the files must be atomically placed in the given directory, which, in most file systems, can be achieved by file move operations (a minimal sketch follows this list).

  • Kafka source - Poll data from Kafka. It’s compatible with Kafka broker versions 0.10.0 or higher. See the Kafka Integration Guide for more details.

  • Socket source (for testing) - Reads UTF8 text data from a socket connection. The listening server socket is at the driver. Note that this should be used only for testing as this does not provide end-to-end fault-tolerance guarantees.
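
For example, a common pattern for placing files atomically is to write each file into a staging directory first and then move it into the watched directory, since a move within the same file system is typically atomic. A minimal sketch on Databricks, with hypothetical paths:

// write the file outside the watched directory, then move it in so the stream never sees a partial file
val staged = "/datasets/streamingFilesStaging/20_16.log" // hypothetical staging path
val target = "/datasets/streamingFiles/20_16.log"
dbutils.fs.put(staged, "2020-11-20 13:20:16+00:00; cat pig\n", true) // overwrite = true
dbutils.fs.mv(staged, target)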

Some sources are not fault-tolerant because they do not guarantee that data can be replayed using checkpointed offsets after a failure. See the earlier section on fault-tolerance semantics. Here are the details of all the sources in Spark.

File source
  • Options:
    · path: path to the input directory, common to all file formats.
    · maxFilesPerTrigger: maximum number of new files to be considered in every trigger (default: no max)
    · latestFirst: whether to process the latest new files first, useful when there is a large backlog of files (default: false)
    · fileNameOnly: whether to check new files based on only the filename instead of the full path (default: false). With this set to `true`, the following files would be considered the same file, because their filenames, "dataset.txt", are the same:
      · "file:///dataset.txt"
      · "s3://a/dataset.txt"
      · "s3n://a/b/dataset.txt"
      · "s3a://a/b/c/dataset.txt"
    For file-format-specific options, see the related methods in DataStreamReader (Scala: https://spark.apache.org/docs/2.2.0/api/scala/index.html#org.apache.spark.sql.streaming.DataStreamReader, Java: https://spark.apache.org/docs/2.2.0/api/java/org/apache/spark/sql/streaming/DataStreamReader.html, Python: https://spark.apache.org/docs/2.2.0/api/python/pyspark.sql.html#pyspark.sql.streaming.DataStreamReader, R: https://spark.apache.org/docs/2.2.0/api/R/read.stream.html). E.g. for the "parquet" format options see DataStreamReader.parquet().
  • Fault-tolerant: Yes
  • Notes: Supports glob paths, but does not support multiple comma-separated paths/globs.

Socket source
  • Options:
    · host: host to connect to, must be specified
    · port: port to connect to, must be specified
  • Fault-tolerant: No

Kafka source
  • Options: See the Kafka Integration Guide.
  • Fault-tolerant: Yes

See https://spark.apache.org/docs/2.2.0/structured-streaming-programming-guide.html#input-sources.

Schema inference and partition of streaming DataFrames/Datasets

By default, Structured Streaming from file-based sources requires you to specify the schema, rather than rely on Spark to infer it automatically (this is what we do with userSchema below). This restriction ensures a consistent schema will be used for the streaming query, even in the case of failures. For ad-hoc use cases, you can re-enable schema inference by setting spark.sql.streaming.schemaInference to true.
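
For ad-hoc exploration, this looks as follows (not recommended for long-running production queries):

// allow Spark to infer the schema of file-based streaming sources
spark.conf.set("spark.sql.streaming.schemaInference", "true")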

Partition discovery does occur when subdirectories that are named /key=value/ are present and listing will automatically recurse into these directories. If these columns appear in the user provided schema, they will be filled in by Spark based on the path of the file being read. The directories that make up the partitioning scheme must be present when the query starts and must remain static. For example, it is okay to add /data/year=2016/ when /data/year=2015/ was present, but it is invalid to change the partitioning column (i.e. by creating the directory /data/date=2016-04-17/).
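
A minimal sketch of such a partitioned layout, with a hypothetical year partition column declared in the user-specified schema so that Spark fills it in from the directory names:

import org.apache.spark.sql.types._

// hypothetical layout:
//   /data/year=2015/part-00000.csv
//   /data/year=2016/part-00000.csv
val partitionedSchema = new StructType()
  .add("time", "timestamp")
  .add("score", "double")
  .add("year", "integer") // filled in by Spark from the year=... directory names

val partitionedCsvDF = spark
  .readStream
  .schema(partitionedSchema)
  .csv("/data")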

Operations on streaming DataFrames/Datasets

You can apply all kinds of operations on streaming DataFrames/Datasets – ranging from untyped, SQL-like operations (e.g. select, where, groupBy), to typed RDD-like operations (e.g. map, filter, flatMap). See the SQL programming guide for more details. Let’s take a look at a few example operations that you can use.

Basic Operations - Selection, Projection, Aggregation

Most of the common operations on DataFrame/Dataset are supported for streaming. The few operations that are not supported are discussed later in the unsupported-operations section.

    case class DeviceData(device: String, deviceType: String, signal: Double, time: DateTime) // DateTime here stands in for a timestamp type, e.g. java.sql.Timestamp

    val df: DataFrame = ... // streaming DataFrame with IOT device data with schema { device: string, deviceType: string, signal: double, time: string }
    val ds: Dataset[DeviceData] = df.as[DeviceData]    // streaming Dataset with IOT device data

    // Select the devices which have signal more than 10
    df.select("device").where("signal > 10")      // using untyped APIs
    ds.filter(_.signal > 10).map(_.device)         // using typed APIs

    // Running count of the number of updates for each device type
    df.groupBy("deviceType").count()                          // using untyped API

    // Running average signal for each device type
    import org.apache.spark.sql.expressions.scalalang.typed
    ds.groupByKey(_.deviceType).agg(typed.avg(_.signal))    // using typed API

A Quick Mixture Example

We will work below with a file stream that simulates random animal names or a simple mixture of two Normal Random Variables.

The two file streams can be achieved by running the code in the following two databricks notebooks in the same cluster:

  • 037a_AnimalNamesStructStreamingFiles
  • 037b_Mix2NormalsStructStreamingFiles

You should have the following set of csv files (the names won't be exactly the same, since they depend on when you start the stream of files).

display(dbutils.fs.ls("/datasets/streamingFilesNormalMixture/"))
path name size
dbfs:/datasets/streamingFilesNormalMixture/48_11/ 48_11/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_19/ 48_19/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_26/ 48_26/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_36/ 48_36/ 0.0
dbfs:/datasets/streamingFilesNormalMixture/48_43/ 48_43/ 0.0

Static and Streaming DataFrames

Let's check out the files and their contents both via static as well as streaming DataFrames.

This will also cement the fact that structured streaming allows interoperability between static and streaming data and can be useful for debugging.

val peekIn = spark.read.format("csv").load("/datasets/streamingFilesNormalMixture/*/*.csv")
peekIn.count() // total count of all the samples in all the files
peekIn: org.apache.spark.sql.DataFrame = [_c0: string, _c1: string]
res8: Long = 500
peekIn.show(5, false) // let's take a quick peek at what's in the CSV files
+-----------------------+--------------------+
|_c0                    |_c1                 |
+-----------------------+--------------------+
|2020-11-16 10:48:25.294|0.21791376679544772 |
|2020-11-16 10:48:25.299|0.011291967445604012|
|2020-11-16 10:48:25.304|-0.30293144696154806|
|2020-11-16 10:48:25.309|0.4303254534802833  |
|2020-11-16 10:48:25.314|1.5521304466388752  |
+-----------------------+--------------------+
only showing top 5 rows
// Read all the csv files written atomically from a directory
import org.apache.spark.sql.types._

//make a user-specified schema - this is needed for structured streaming from files
val userSchema = new StructType()
                      .add("time", "timestamp")
                      .add("score", "Double")

// a static DF is convenient 
val csvStaticDF = spark
  .read
  .option("sep", ",") // delimiter is ','
  .schema(userSchema) // Specify schema of the csv files as pre-defined by user
  .csv("/datasets/streamingFilesNormalMixture/*/*.csv")    // Equivalent to format("csv").load("/path/to/directory")

// streaming DF
val csvStreamingDF = spark
  .readStream
  .option("sep", ",") // delimiter is ','
  .schema(userSchema) // Specify schema of the csv files as pre-defined by user
  .option("MaxFilesPerTrigger", 1) //  maximum number of new files to be considered in every trigger (default: no max) 
  .csv("/datasets/streamingFilesNormalMixture/*/*.csv")    // Equivalent to format("csv").load("/path/to/directory")
import org.apache.spark.sql.types._
userSchema: org.apache.spark.sql.types.StructType = StructType(StructField(time,TimestampType,true), StructField(score,DoubleType,true))
csvStaticDF: org.apache.spark.sql.DataFrame = [time: timestamp, score: double]
csvStreamingDF: org.apache.spark.sql.DataFrame = [time: timestamp, score: double]
csvStreamingDF.isStreaming    // Returns True for DataFrames that have streaming sources
res12: Boolean = true
csvStreamingDF.printSchema
root
 |-- time: timestamp (nullable = true)
 |-- score: double (nullable = true)
display(csvStreamingDF) // if you want to see the stream coming at you as csvDF
time score
2020-11-16T10:48:11.194+0000 0.2576188264990721
2020-11-16T10:48:11.199+0000 -0.13149698512045327
2020-11-16T10:48:11.204+0000 1.4139063973267458
2020-11-16T10:48:11.209+0000 -2.3833875968513496e-2
2020-11-16T10:48:11.215+0000 0.7274784426774964
2020-11-16T10:48:11.220+0000 -1.0658630481235276
2020-11-16T10:48:11.225+0000 0.746959841932221
2020-11-16T10:48:11.230+0000 0.30477096247050206
2020-11-16T10:48:11.235+0000 -6.407620682061621e-2
2020-11-16T10:48:11.241+0000 1.8464307210258604
2020-11-16T10:48:11.246+0000 2.0786529531264355
2020-11-16T10:48:11.251+0000 0.685838993990332
2020-11-16T10:48:11.256+0000 2.3056211153362485
2020-11-16T10:48:11.261+0000 -0.7435548094085835
2020-11-16T10:48:11.267+0000 -0.36946067155650786
2020-11-16T10:48:11.272+0000 1.1178132434092503
2020-11-16T10:48:11.277+0000 1.0672400098827672
2020-11-16T10:48:11.282+0000 2.403799182291664
2020-11-16T10:48:11.287+0000 2.7905949803662926
2020-11-16T10:48:11.293+0000 2.3901047303648846
2020-11-16T10:48:11.298+0000 2.2391322699010967
2020-11-16T10:48:11.303+0000 0.7102559487906945
2020-11-16T10:48:11.308+0000 -0.1875570296359037
2020-11-16T10:48:11.313+0000 2.0036998039560725
2020-11-16T10:48:11.318+0000 2.028162246705019
2020-11-16T10:48:11.324+0000 -1.1084782237141253
2020-11-16T10:48:11.329+0000 2.7320985336302965
2020-11-16T10:48:11.334+0000 1.7953021498619885
2020-11-16T10:48:11.339+0000 1.3332433299615185
2020-11-16T10:48:11.344+0000 1.2842120504662247
2020-11-16T10:48:11.349+0000 2.0013530061962186
2020-11-16T10:48:11.355+0000 1.2596569236824775
2020-11-16T10:48:11.360+0000 2.46479668588018
2020-11-16T10:48:11.365+0000 -0.7015927727061835
2020-11-16T10:48:11.370+0000 -0.510611131534981
2020-11-16T10:48:11.375+0000 0.9403812557496112
2020-11-16T10:48:11.381+0000 2.2306482205877427
2020-11-16T10:48:11.386+0000 -0.29781070820511246
2020-11-16T10:48:11.391+0000 4.107241990001628
2020-11-16T10:48:11.396+0000 0.7420568724108764
2020-11-16T10:48:11.401+0000 1.4652231673746594
2020-11-16T10:48:11.407+0000 0.8793849318247119
2020-11-16T10:48:11.412+0000 1.7671614106752898
2020-11-16T10:48:11.417+0000 1.1995772213743607
2020-11-16T10:48:11.422+0000 1.1351566745099897
2020-11-16T10:48:11.427+0000 0.16150528245701323
2020-11-16T10:48:11.432+0000 2.459849452657596
2020-11-16T10:48:11.438+0000 1.0796739450956971
2020-11-16T10:48:11.443+0000 -1.2079899446434252
2020-11-16T10:48:11.448+0000 0.7019279468450133
2020-11-16T10:48:11.453+0000 -2.5906759976580096e-2
2020-11-16T10:48:11.458+0000 1.025799236502406
2020-11-16T10:48:11.463+0000 2.423754193708396
2020-11-16T10:48:11.469+0000 1.0100073192180106
2020-11-16T10:48:11.474+0000 1.2308412912433588
2020-11-16T10:48:11.479+0000 2.2142939785873326
2020-11-16T10:48:11.484+0000 9.639219241219372
2020-11-16T10:48:11.489+0000 0.8964067897832677
2020-11-16T10:48:11.494+0000 2.583753664296168
2020-11-16T10:48:11.499+0000 1.7326439212827238
2020-11-16T10:48:11.505+0000 0.7516388863094139
2020-11-16T10:48:11.510+0000 0.8725633940449549
2020-11-16T10:48:11.515+0000 -0.9407676766254014
2020-11-16T10:48:11.520+0000 1.0542712925875175
2020-11-16T10:48:11.525+0000 0.794535189312687
2020-11-16T10:48:11.530+0000 0.5813794557982226
2020-11-16T10:48:11.536+0000 0.4891368786472011
2020-11-16T10:48:11.541+0000 2.3296394918008474
2020-11-16T10:48:11.546+0000 1.425296303524094
2020-11-16T10:48:11.551+0000 1.9276679925454094
2020-11-16T10:48:11.556+0000 0.6178050147872097
2020-11-16T10:48:11.561+0000 1.135269636375052
2020-11-16T10:48:11.567+0000 1.3074367248762568
2020-11-16T10:48:11.572+0000 0.6105659268751382
2020-11-16T10:48:11.577+0000 1.7812955395572572
2020-11-16T10:48:11.582+0000 -1.3547368916771827
2020-11-16T10:48:11.587+0000 1.580412775615275
2020-11-16T10:48:11.592+0000 1.5731144914401023
2020-11-16T10:48:11.597+0000 -5.725067553082108e-2
2020-11-16T10:48:11.603+0000 0.19580347035995105
2020-11-16T10:48:11.608+0000 -2.1501122555202867e-2
2020-11-16T10:48:11.613+0000 1.5783579658949254
2020-11-16T10:48:11.618+0000 1.371796305513024
2020-11-16T10:48:11.623+0000 0.648919899258448
2020-11-16T10:48:11.628+0000 -0.7875773550339058
2020-11-16T10:48:11.633+0000 1.3233945353130716
2020-11-16T10:48:11.639+0000 2.5685224032022127
2020-11-16T10:48:11.644+0000 2.7331317575905807
2020-11-16T10:48:11.649+0000 0.2521381731074053
2020-11-16T10:48:11.654+0000 2.2408918489807905
2020-11-16T10:48:11.659+0000 1.4924862197354933
2020-11-16T10:48:11.664+0000 1.194657083531184
2020-11-16T10:48:11.670+0000 0.7067352811215412
2020-11-16T10:48:11.675+0000 2.7701718519244745e-2
2020-11-16T10:48:11.681+0000 0.279797547315617
2020-11-16T10:48:11.686+0000 -0.21953266770586133
2020-11-16T10:48:11.691+0000 1.1402931320647434
2020-11-16T10:48:11.696+0000 0.904724947360263
2020-11-16T10:48:11.702+0000 0.6677145203694429
2020-11-16T10:48:11.707+0000 2.019977647420342
2020-11-16T10:48:18.539+0000 -0.5190278662580565
2020-11-16T10:48:18.545+0000 1.2549405940975034
2020-11-16T10:48:18.550+0000 2.4267606721380233
2020-11-16T10:48:18.555+0000 0.21858105660909444
2020-11-16T10:48:18.560+0000 1.7701229392924476
2020-11-16T10:48:18.566+0000 8.326770280505069e-2
2020-11-16T10:48:18.571+0000 11.539205812425335
2020-11-16T10:48:18.576+0000 0.612370126029857
2020-11-16T10:48:18.581+0000 1.299073306785623
2020-11-16T10:48:18.586+0000 2.6939073650678083
2020-11-16T10:48:18.592+0000 2.5320627406973344
2020-11-16T10:48:18.597+0000 2.781337457744293e-2
2020-11-16T10:48:18.602+0000 0.3272489908510584
2020-11-16T10:48:18.607+0000 -0.9427386544836929
2020-11-16T10:48:18.613+0000 0.9364640268126377
2020-11-16T10:48:18.618+0000 1.919225736153371
2020-11-16T10:48:18.623+0000 0.38826998132506296
2020-11-16T10:48:18.628+0000 -0.38655650387475715
2020-11-16T10:48:18.633+0000 1.0433731216978939
2020-11-16T10:48:18.638+0000 1.1500718903613745
2020-11-16T10:48:18.644+0000 -0.3661280681150447
2020-11-16T10:48:18.649+0000 0.883444064705467
2020-11-16T10:48:18.654+0000 -0.9126173899348853
2020-11-16T10:48:18.659+0000 0.3838114564837034
2020-11-16T10:48:18.665+0000 0.7935189081504388
2020-11-16T10:48:18.670+0000 1.928137393349846
2020-11-16T10:48:18.675+0000 4.7092811957255676e-2
2020-11-16T10:48:18.680+0000 0.4684849965794433
2020-11-16T10:48:18.685+0000 0.6745536358089256
2020-11-16T10:48:18.691+0000 2.100439331925503
2020-11-16T10:48:18.696+0000 1.0053957395581328
2020-11-16T10:48:18.701+0000 1.1651633690031988
2020-11-16T10:48:18.706+0000 1.1620631665685186
2020-11-16T10:48:18.711+0000 0.5686294459758102
2020-11-16T10:48:18.717+0000 5.4695916815372114e-2
2020-11-16T10:48:18.722+0000 0.3673527645506809
2020-11-16T10:48:18.727+0000 1.1825682382920246
2020-11-16T10:48:18.732+0000 2.590900208851957
2020-11-16T10:48:18.738+0000 0.9580677196122074
2020-11-16T10:48:18.743+0000 0.14058634902492095
2020-11-16T10:48:18.748+0000 1.835715236145623
2020-11-16T10:48:18.753+0000 1.0262133311924941
2020-11-16T10:48:18.758+0000 2.3956360313411276
2020-11-16T10:48:18.763+0000 -0.42622276533874537
2020-11-16T10:48:18.769+0000 1.532866051791267
2020-11-16T10:48:18.774+0000 0.33837135147986275
2020-11-16T10:48:18.779+0000 0.5993221970260502
2020-11-16T10:48:18.784+0000 0.5268259369536397
2020-11-16T10:48:18.789+0000 0.9338448405595184
2020-11-16T10:48:18.795+0000 1.5020324977316601
2020-11-16T10:48:18.800+0000 -0.21633343524824378
2020-11-16T10:48:18.805+0000 0.8387080531274844
2020-11-16T10:48:18.810+0000 1.3278878139665884e-2
2020-11-16T10:48:18.815+0000 1.3291762275434373
2020-11-16T10:48:18.820+0000 0.4837833343304839
2020-11-16T10:48:18.826+0000 0.4918446444728072
2020-11-16T10:48:18.831+0000 1.354678573169704
2020-11-16T10:48:18.836+0000 0.2524216007924791
2020-11-16T10:48:18.841+0000 0.5965026762340784
2020-11-16T10:48:18.846+0000 2.000850130836448
2020-11-16T10:48:18.851+0000 2.217169275505519
2020-11-16T10:48:18.857+0000 0.6876140376775531
2020-11-16T10:48:18.862+0000 1.0508210912529563
2020-11-16T10:48:18.867+0000 1.65676102704454
2020-11-16T10:48:18.872+0000 2.155047641017994
2020-11-16T10:48:18.877+0000 1.0866488363653375
2020-11-16T10:48:18.882+0000 1.0691398773308363
2020-11-16T10:48:18.888+0000 0.6120836384011098
2020-11-16T10:48:18.893+0000 0.24914099314834415
2020-11-16T10:48:18.898+0000 2.8691481936548744
2020-11-16T10:48:18.903+0000 0.7633561289177443
2020-11-16T10:48:18.908+0000 1.4483835248568062
2020-11-16T10:48:18.913+0000 2.6108825545691863
2020-11-16T10:48:18.918+0000 1.2751533422561458
2020-11-16T10:48:18.924+0000 1.0131179898567302
2020-11-16T10:48:18.929+0000 0.46308679994249036
2020-11-16T10:48:18.935+0000 0.7793261962344651
2020-11-16T10:48:18.940+0000 1.1671037114122738
2020-11-16T10:48:18.945+0000 2.143874895015684
2020-11-16T10:48:18.950+0000 1.2344250301306705
2020-11-16T10:48:18.955+0000 1.7402355361851662
2020-11-16T10:48:18.960+0000 1.0396911219696297
2020-11-16T10:48:18.966+0000 1.8089030277370215
2020-11-16T10:48:18.971+0000 2.1235708326267533
2020-11-16T10:48:18.976+0000 -0.33938888075466234
2020-11-16T10:48:18.981+0000 1.090463095441436
2020-11-16T10:48:18.986+0000 1.3101016219338661
2020-11-16T10:48:18.992+0000 -0.6251493773996968
2020-11-16T10:48:18.998+0000 1.7223308331307168
2020-11-16T10:48:19.003+0000 1.0299845635585438
2020-11-16T10:48:19.009+0000 1.962846046162154
2020-11-16T10:48:19.014+0000 -1.8537289273720337e-2
2020-11-16T10:48:19.019+0000 0.7977254725466605
2020-11-16T10:48:19.024+0000 -0.21427479370557312
2020-11-16T10:48:19.029+0000 -1.6661289018266037
2020-11-16T10:48:19.034+0000 1.144457447997468
2020-11-16T10:48:19.043+0000 0.6503516296653954
2020-11-16T10:48:19.048+0000 6.581335919503728e-2
2020-11-16T10:48:19.053+0000 1.5478749815243467
2020-11-16T10:48:19.058+0000 1.5497411627601851
2020-11-16T10:48:25.294+0000 0.21791376679544772
2020-11-16T10:48:25.299+0000 1.1291967445604012e-2
2020-11-16T10:48:25.304+0000 -0.30293144696154806
2020-11-16T10:48:25.309+0000 0.4303254534802833
2020-11-16T10:48:25.314+0000 1.5521304466388752
2020-11-16T10:48:25.319+0000 2.2910302464408394
2020-11-16T10:48:25.325+0000 0.4374695472538803
2020-11-16T10:48:25.330+0000 0.4085186427342812
2020-11-16T10:48:25.335+0000 -6.531316403553289e-2
2020-11-16T10:48:25.340+0000 6.39812257122474e-3
2020-11-16T10:48:25.345+0000 0.24840501087934996
2020-11-16T10:48:25.350+0000 -1.021974709142702
2020-11-16T10:48:25.355+0000 -9.233941622902653e-2
2020-11-16T10:48:25.361+0000 0.41027379764960337
2020-11-16T10:48:25.366+0000 1.864567223228712
2020-11-16T10:48:25.371+0000 1.5393474896194466
2020-11-16T10:48:25.376+0000 1.124907339909468
2020-11-16T10:48:25.381+0000 2.0206475875654997
2020-11-16T10:48:25.386+0000 -0.7058862229186389
2020-11-16T10:48:25.392+0000 1.2344926787652002
2020-11-16T10:48:25.397+0000 1.1406194673922239
2020-11-16T10:48:25.402+0000 1.4084552620839659
2020-11-16T10:48:25.407+0000 0.739931161380885
2020-11-16T10:48:25.412+0000 0.29958396894640427
2020-11-16T10:48:25.417+0000 -0.9379262816791101
2020-11-16T10:48:25.422+0000 0.8259556704405835
2020-11-16T10:48:25.428+0000 -0.3199802616466474
2020-11-16T10:48:25.433+0000 1.9656420693625898
2020-11-16T10:48:25.438+0000 0.8789984776053141
2020-11-16T10:48:25.443+0000 2.4965042040211793
2020-11-16T10:48:25.448+0000 1.714778861431627
2020-11-16T10:48:25.454+0000 0.8669641143187272
2020-11-16T10:48:25.459+0000 1.0757413525008879
2020-11-16T10:48:25.464+0000 1.9658378382249264e-2
2020-11-16T10:48:25.469+0000 0.7165095911306543
2020-11-16T10:48:25.474+0000 1.2251547673860115
2020-11-16T10:48:25.479+0000 1.5869187313570912
2020-11-16T10:48:25.485+0000 0.3928727449886338
2020-11-16T10:48:25.490+0000 1.7722759642539445
2020-11-16T10:48:25.495+0000 1.0350331272239843
2020-11-16T10:48:25.500+0000 -1.4234008750858624
2020-11-16T10:48:25.505+0000 0.6054572828043063
2020-11-16T10:48:25.511+0000 0.3024585268617903
2020-11-16T10:48:25.516+0000 2.9432999768948087e-2
2020-11-16T10:48:25.521+0000 0.9382472473173075
2020-11-16T10:48:25.526+0000 2.11287419383702
2020-11-16T10:48:25.531+0000 1.0876022969280528
2020-11-16T10:48:25.536+0000 0.36548993902899596
2020-11-16T10:48:25.542+0000 -2.005053653271253
2020-11-16T10:48:25.547+0000 2.0367928918435894
2020-11-16T10:48:25.552+0000 9.261254419611942e-2
2020-11-16T10:48:25.557+0000 2.156248406806113
2020-11-16T10:48:25.562+0000 -0.5295405173638772
2020-11-16T10:48:25.568+0000 2.452318995994742
2020-11-16T10:48:25.573+0000 0.8636413385915132
2020-11-16T10:48:25.578+0000 0.31460938814139794
2020-11-16T10:48:25.583+0000 -2.0257131370059023e-2
2020-11-16T10:48:25.588+0000 1.3213739526626505
2020-11-16T10:48:25.593+0000 0.9463001869917488
2020-11-16T10:48:25.599+0000 0.986171393681171
2020-11-16T10:48:25.604+0000 0.12492672949874628
2020-11-16T10:48:25.609+0000 0.9908400692267174
2020-11-16T10:48:25.614+0000 1.0695623856543282
2020-11-16T10:48:25.621+0000 1.0221220766637027
2020-11-16T10:48:25.627+0000 2.8492797946693904
2020-11-16T10:48:25.632+0000 1.0609742751901396
2020-11-16T10:48:25.637+0000 1.6409490831011158
2020-11-16T10:48:25.642+0000 1.5427085071446491
2020-11-16T10:48:25.647+0000 1.7312859942989034
2020-11-16T10:48:25.653+0000 1.2947069326850533
2020-11-16T10:48:25.658+0000 0.3756138591369289
2020-11-16T10:48:25.663+0000 1.4349084022701803
2020-11-16T10:48:25.668+0000 0.37649651121290106
2020-11-16T10:48:25.673+0000 0.7071860096564935
2020-11-16T10:48:25.679+0000 1.5065536846394356
2020-11-16T10:48:25.684+0000 0.15009861698305105
2020-11-16T10:48:25.689+0000 3.5084734586888766e-2
2020-11-16T10:48:25.695+0000 1.9474563946729155
2020-11-16T10:48:25.700+0000 9.423175513609095
2020-11-16T10:48:25.705+0000 2.4871634825039015
2020-11-16T10:48:25.710+0000 2.8472676324820685
2020-11-16T10:48:25.715+0000 1.5999488876250578
2020-11-16T10:48:25.720+0000 -0.2693864675719999
2020-11-16T10:48:25.725+0000 1.6304414331783441
2020-11-16T10:48:25.731+0000 0.39324529792831353
2020-11-16T10:48:25.736+0000 0.4053253263569069
2020-11-16T10:48:25.741+0000 0.9270234970247857
2020-11-16T10:48:25.746+0000 1.4509585503273819
2020-11-16T10:48:25.751+0000 0.8878267401905819
2020-11-16T10:48:25.756+0000 1.1883024549090635
2020-11-16T10:48:25.761+0000 1.0163155722641077
2020-11-16T10:48:25.767+0000 -0.8003099498427713
2020-11-16T10:48:25.772+0000 -0.9483216075980454
2020-11-16T10:48:25.777+0000 1.0437451610964232
2020-11-16T10:48:25.782+0000 2.19837214407137
2020-11-16T10:48:25.787+0000 2.070797890483533
2020-11-16T10:48:25.792+0000 1.2067096088561005
2020-11-16T10:48:25.798+0000 0.5043809533024068
2020-11-16T10:48:25.803+0000 0.3683130512293926
2020-11-16T10:48:25.808+0000 1.0968506619209946
2020-11-16T10:48:35.887+0000 -0.6602896123630477
2020-11-16T10:48:35.892+0000 6.829641971377687e-2
2020-11-16T10:48:35.898+0000 1.5578597945995134
2020-11-16T10:48:35.903+0000 0.9822629073468155
2020-11-16T10:48:35.908+0000 -0.7900771590527182
2020-11-16T10:48:35.913+0000 1.1194124344742182
2020-11-16T10:48:35.918+0000 1.1239015052468448
2020-11-16T10:48:35.924+0000 1.9447892371838207
2020-11-16T10:48:35.929+0000 2.0854603958592985
2020-11-16T10:48:35.934+0000 0.17341117815802976
2020-11-16T10:48:35.939+0000 1.5971150699056031
2020-11-16T10:48:35.944+0000 0.35646629992342993
2020-11-16T10:48:35.950+0000 1.8107324499508701
2020-11-16T10:48:35.955+0000 3.463539114641669
2020-11-16T10:48:35.960+0000 0.8683263379823365
2020-11-16T10:48:35.965+0000 1.2642821462325637
2020-11-16T10:48:35.970+0000 1.0099560176390794
2020-11-16T10:48:35.975+0000 1.1930381560126895
2020-11-16T10:48:35.981+0000 0.5433757598192581
2020-11-16T10:48:35.986+0000 1.0213782743479625
2020-11-16T10:48:35.991+0000 1.5049231054950472
2020-11-16T10:48:35.996+0000 0.22101559200796428
2020-11-16T10:48:36.001+0000 1.8743753391414122
2020-11-16T10:48:36.006+0000 0.6050230742039573
2020-11-16T10:48:36.012+0000 0.6939669876285336
2020-11-16T10:48:36.017+0000 1.5379566524515602
2020-11-16T10:48:36.022+0000 -0.6869579758877387
2020-11-16T10:48:36.027+0000 -0.4823865565169676
2020-11-16T10:48:36.032+0000 2.577388594447341
2020-11-16T10:48:36.037+0000 0.9323745950234809
2020-11-16T10:48:36.043+0000 -0.25032440836547454
2020-11-16T10:48:36.048+0000 1.1141701800611599
2020-11-16T10:48:36.053+0000 1.1577408343996396
2020-11-16T10:48:36.058+0000 0.4735089125920344
2020-11-16T10:48:36.063+0000 -1.5559289264558278
2020-11-16T10:48:36.068+0000 -0.11080485473390023
2020-11-16T10:48:36.073+0000 0.1536430200356127
2020-11-16T10:48:36.079+0000 1.2851073161790278
2020-11-16T10:48:36.084+0000 -0.9717966387140513
2020-11-16T10:48:36.089+0000 0.4604981927819666
2020-11-16T10:48:36.094+0000 0.4825924627571432
2020-11-16T10:48:36.099+0000 1.8907687599342153
2020-11-16T10:48:36.104+0000 1.5027092114554406
2020-11-16T10:48:36.110+0000 0.4892227077808574
2020-11-16T10:48:36.115+0000 2.2742380779964306
2020-11-16T10:48:36.120+0000 5.93203161994782e-3
2020-11-16T10:48:36.125+0000 0.9357077683018076
2020-11-16T10:48:36.130+0000 1.6452901327178684
2020-11-16T10:48:36.136+0000 2.5989481778450294
2020-11-16T10:48:36.141+0000 3.1233030636814103
2020-11-16T10:48:36.146+0000 2.14412876458466
2020-11-16T10:48:36.151+0000 0.8645332371791754
2020-11-16T10:48:36.157+0000 1.7396751361758789
2020-11-16T10:48:36.163+0000 3.406726808728102
2020-11-16T10:48:36.169+0000 0.27592904706426413
2020-11-16T10:48:36.174+0000 -0.47288172874607715
2020-11-16T10:48:36.179+0000 3.1581200247451022
2020-11-16T10:48:36.184+0000 2.3502844371874003
2020-11-16T10:48:36.190+0000 2.3604518998272104
2020-11-16T10:48:36.195+0000 2.875582435906723
2020-11-16T10:48:36.200+0000 1.802101533727158
2020-11-16T10:48:36.205+0000 2.158082491464444
2020-11-16T10:48:36.210+0000 -0.5284223682158626
2020-11-16T10:48:36.216+0000 1.929919317533868e-2
2020-11-16T10:48:36.221+0000 1.948485504832782
2020-11-16T10:48:36.226+0000 0.49379467644006303
2020-11-16T10:48:36.231+0000 0.33811694243690293
2020-11-16T10:48:36.236+0000 1.332171769010618
2020-11-16T10:48:36.242+0000 0.6994701270153069
2020-11-16T10:48:36.247+0000 -0.413721820026016
2020-11-16T10:48:36.252+0000 -1.5522089380783108
2020-11-16T10:48:36.257+0000 2.161396170492705
2020-11-16T10:48:36.262+0000 2.333496950423164e-2
2020-11-16T10:48:36.268+0000 -0.10913840839170796
2020-11-16T10:48:36.273+0000 1.1299228472291496
2020-11-16T10:48:36.278+0000 2.4274358384176584
2020-11-16T10:48:36.283+0000 1.9359707345891741
2020-11-16T10:48:36.288+0000 3.487722218477596
2020-11-16T10:48:36.294+0000 0.9990127159196325
2020-11-16T10:48:36.299+0000 -1.0398429191328207
2020-11-16T10:48:36.304+0000 0.3005833334887211
2020-11-16T10:48:36.309+0000 -0.7334628100431295
2020-11-16T10:48:36.314+0000 0.4835865602253189
2020-11-16T10:48:36.320+0000 0.5246945471836175
2020-11-16T10:48:36.325+0000 0.8469783573593253
2020-11-16T10:48:36.330+0000 0.8359162587262456
2020-11-16T10:48:36.335+0000 0.7772016511976113
2020-11-16T10:48:36.340+0000 -0.39849883029666944
2020-11-16T10:48:36.345+0000 1.8703097604547239
2020-11-16T10:48:36.350+0000 2.682932324516024
2020-11-16T10:48:36.356+0000 0.46996888720103236
2020-11-16T10:48:36.361+0000 -7.881388366585762e-2
2020-11-16T10:48:36.366+0000 2.1043645061434084
2020-11-16T10:48:36.371+0000 0.6195230903468327
2020-11-16T10:48:36.376+0000 -0.23170755440676594
2020-11-16T10:48:36.381+0000 0.3918168388047796
2020-11-16T10:48:36.386+0000 0.22086080450987344
2020-11-16T10:48:36.392+0000 1.5182059037248368
2020-11-16T10:48:36.397+0000 1.6442851975073318
2020-11-16T10:48:36.402+0000 0.3979663516003099
2020-11-16T10:48:42.690+0000 2.0531657985840983
2020-11-16T10:48:42.696+0000 1.7928797637680196
2020-11-16T10:48:42.701+0000 2.9329556976986013
2020-11-16T10:48:42.706+0000 1.1087520027663345
2020-11-16T10:48:42.711+0000 1.2115868818351045
2020-11-16T10:48:42.716+0000 1.9163661519192294
2020-11-16T10:48:42.722+0000 1.6917128257752045
2020-11-16T10:48:42.727+0000 1.0095879056962782
2020-11-16T10:48:42.732+0000 -0.13611276130309613
2020-11-16T10:48:42.737+0000 2.2939319088848023
2020-11-16T10:48:42.742+0000 1.0723690693732042
2020-11-16T10:48:42.748+0000 2.1452154961792393
2020-11-16T10:48:42.753+0000 0.7259078662420231
2020-11-16T10:48:42.758+0000 2.6599123456452727
2020-11-16T10:48:42.763+0000 0.2519779820647646
2020-11-16T10:48:42.768+0000 2.1670014817546175
2020-11-16T10:48:42.773+0000 0.10506784220981513
2020-11-16T10:48:42.779+0000 2.018185302480656
2020-11-16T10:48:42.784+0000 1.1665983169452525
2020-11-16T10:48:42.789+0000 0.33284879429952463
2020-11-16T10:48:42.794+0000 0.3531339079979545
2020-11-16T10:48:42.799+0000 2.1004784012229245
2020-11-16T10:48:42.805+0000 1.282680965361929
2020-11-16T10:48:42.810+0000 1.2270715852857979
2020-11-16T10:48:42.815+0000 0.858598096986649
2020-11-16T10:48:42.820+0000 2.5040344133072407
2020-11-16T10:48:42.825+0000 1.6541952933075013
2020-11-16T10:48:42.831+0000 0.5329588210461834
2020-11-16T10:48:42.836+0000 2.1274892552565134
2020-11-16T10:48:42.841+0000 1.4668875035709574
2020-11-16T10:48:42.846+0000 1.5382758818248594
2020-11-16T10:48:42.851+0000 1.7428172106530586
2020-11-16T10:48:42.856+0000 1.4727771685178368
2020-11-16T10:48:42.861+0000 1.6023481462981235
2020-11-16T10:48:42.867+0000 1.6577898477375492
2020-11-16T10:48:42.872+0000 5.892056976555449e-2
2020-11-16T10:48:42.877+0000 2.7754262543475523
2020-11-16T10:48:42.882+0000 1.2200523142327606
2020-11-16T10:48:42.887+0000 1.5903756890326521
2020-11-16T10:48:42.893+0000 -1.49547625208842
2020-11-16T10:48:42.898+0000 0.8523817097750093
2020-11-16T10:48:42.903+0000 0.5057853403549346
2020-11-16T10:48:42.908+0000 0.5683629007876065
2020-11-16T10:48:42.913+0000 1.6479513379049497
2020-11-16T10:48:42.918+0000 1.2148679515188867
2020-11-16T10:48:42.924+0000 0.6222019509815193
2020-11-16T10:48:42.929+0000 1.3255067306263184
2020-11-16T10:48:42.934+0000 0.4983375954130155
2020-11-16T10:48:42.939+0000 -8.802709440091383e-2
2020-11-16T10:48:42.944+0000 0.13831985322805507
2020-11-16T10:48:42.949+0000 -0.5487242466777436
2020-11-16T10:48:42.954+0000 -0.32058114510029334
2020-11-16T10:48:42.960+0000 1.8950590840214767
2020-11-16T10:48:42.965+0000 1.0062190610750874
2020-11-16T10:48:42.971+0000 -0.9934439161367286
2020-11-16T10:48:42.976+0000 0.3671557383587293
2020-11-16T10:48:42.981+0000 0.19986189782147756
2020-11-16T10:48:42.986+0000 -0.49653972053539497
2020-11-16T10:48:42.991+0000 0.6848255848767759
2020-11-16T10:48:42.996+0000 1.5219606199148406
2020-11-16T10:48:43.002+0000 1.455086538348867
2020-11-16T10:48:43.007+0000 2.883109155648917
2020-11-16T10:48:43.012+0000 1.8164694435868296
2020-11-16T10:48:43.017+0000 0.6742710281863775
2020-11-16T10:48:43.022+0000 0.5441958963393487
2020-11-16T10:48:43.027+0000 1.0517397813571259
2020-11-16T10:48:43.033+0000 0.8356831003190489
2020-11-16T10:48:43.038+0000 0.8227690076487093
2020-11-16T10:48:43.043+0000 1.4570119880481842
2020-11-16T10:48:43.048+0000 -0.297581775651637
2020-11-16T10:48:43.053+0000 -7.206180041345078e-2
2020-11-16T10:48:43.059+0000 -0.8739444049086391
2020-11-16T10:48:43.064+0000 2.2604530979343074
2020-11-16T10:48:43.069+0000 2.3872947344763027
2020-11-16T10:48:43.074+0000 3.3685772895980124
2020-11-16T10:48:43.079+0000 2.013534739447639
2020-11-16T10:48:43.085+0000 3.368251328412311
2020-11-16T10:48:43.090+0000 0.8953451648220483
2020-11-16T10:48:43.095+0000 9.545874578601765e-2
2020-11-16T10:48:43.100+0000 0.7718477167244377
2020-11-16T10:48:43.105+0000 1.0629106168204554
2020-11-16T10:48:43.110+0000 0.5518190802821734
2020-11-16T10:48:43.116+0000 2.9939679918505853
2020-11-16T10:48:43.121+0000 1.8726021041818661
2020-11-16T10:48:43.126+0000 0.2653885457840085
2020-11-16T10:48:43.131+0000 1.9872672471653996
2020-11-16T10:48:43.136+0000 -0.553166557898946
2020-11-16T10:48:43.141+0000 1.5640591286122745
2020-11-16T10:48:43.147+0000 2.52680639118602
2020-11-16T10:48:43.152+0000 1.80742439492357
2020-11-16T10:48:43.157+0000 2.1955997975781347
2020-11-16T10:48:43.162+0000 0.5980285235875027
2020-11-16T10:48:43.167+0000 -0.2658797956060317
2020-11-16T10:48:43.172+0000 -0.49719135472382137
2020-11-16T10:48:43.178+0000 1.180607461695498
2020-11-16T10:48:43.183+0000 -0.10430878902480734
2020-11-16T10:48:43.188+0000 0.823892717854915
2020-11-16T10:48:43.193+0000 1.666382974377688
2020-11-16T10:48:43.198+0000 3.748395965408928
2020-11-16T10:48:43.204+0000 1.7921581120532326e-2
import org.apache.spark.sql.functions._

// Start running the query that prints the running counts to the console
val query = csvStreamingDF
                 // bround simply rounds the double to the desired decimal place - 0 in our case here. 
                   // see https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/functions.html#bround-org.apache.spark.sql.Column-
                   // we are using bround to simply coarsen our data into bins for counts
                 .select(bround($"score", 0).as("binnedScore")) 
                 .groupBy($"binnedScore")
                 .agg(count($"binnedScore") as "binnedScoreCounts")
                 .orderBy($"binnedScore")
                 .writeStream
                 .outputMode("complete")
                 .format("console")
                 .start()
                 
query.awaitTermination() // hit cancel to terminate
-------------------------------------------
Batch: 0
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -1.0|                9|
|        0.0|               18|
|        1.0|               41|
|        2.0|               25|
|        3.0|                5|
|        4.0|                1|
|       10.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 1
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                1|
|       -1.0|               13|
|        0.0|               44|
|        1.0|               83|
|        2.0|               46|
|        3.0|               10|
|        4.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 2
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                2|
|       -1.0|               20|
|        0.0|               74|
|        1.0|              118|
|        2.0|               70|
|        3.0|               12|
|        4.0|                1|
|        9.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 3
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                4|
|       -1.0|               27|
|        0.0|              104|
|        1.0|              144|
|        2.0|               96|
|        3.0|               21|
|        4.0|                1|
|        9.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 4
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                4|
|       -1.0|               32|
|        0.0|              125|
|        1.0|              179|
|        2.0|              125|
|        3.0|               30|
|        4.0|                2|
|        9.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

Once the above streaming job has processed all the files in the directory, it will continue to listen for new files arriving in the directory. You could, for example, return to the other notebook 037b_Mix2NormalsStructStreamingFiles, rerun the cell that writes another batch of newer files into the directory, and then return to this notebook to watch the above streaming job continue with additional batches.

Static and Streaming DataSets

These examples generate streaming DataFrames that are untyped, meaning that the schema of the DataFrame is not checked at compile time, only checked at runtime when the query is submitted. Some operations like map, flatMap, etc. need the type to be known at compile time. To do those, you can convert these untyped streaming DataFrames to typed streaming Datasets using the same methods as static DataFrame. See the SQL Programming Guide for more details. Additionally, more details on the supported streaming sources are discussed later in the document.

Let us make a dataset version of the streaming dataframe.

But first let us make the dataset from the static dataframe and then apply the same approach to the streaming dataframe.

csvStaticDF.printSchema // schema of the static DF
root
 |-- time: timestamp (nullable = true)
 |-- score: double (nullable = true)
import org.apache.spark.sql.types._
import java.sql.Timestamp

// create a case class to make the dataset
case class timedScores(time: Timestamp, score: Double)

val csvStaticDS = csvStaticDF.as[timedScores] // create a dataset from the dataframe
import org.apache.spark.sql.types._
import java.sql.Timestamp
defined class timedScores
csvStaticDS: org.apache.spark.sql.Dataset[timedScores] = [time: timestamp, score: double]
csvStaticDS.show(5,false) // looks like we got the dataset we want with strong typing
+-----------------------+--------------------+
|time                   |score               |
+-----------------------+--------------------+
|2020-11-16 10:48:25.294|0.21791376679544772 |
|2020-11-16 10:48:25.299|0.011291967445604012|
|2020-11-16 10:48:25.304|-0.30293144696154806|
|2020-11-16 10:48:25.309|0.4303254534802833  |
|2020-11-16 10:48:25.314|1.5521304466388752  |
+-----------------------+--------------------+
only showing top 5 rows

Now let us use the same code for making a streaming dataset.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import java.sql.Timestamp

// create a case class to make the dataset
case class timedScores(time: Timestamp, score: Double)

val csvStreamingDS = csvStreamingDF.as[timedScores] // create a dataset from the dataframe

// Start running the query that prints the running counts to the console
val query = csvStreamingDS
                  // bround simply rounds the double to the desired decimal place - 0 in our case here. 
                   // see https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/functions.html#bround-org.apache.spark.sql.Column-
                   // we are using bround to simply coarsen our data into bins for counts
                 .select(bround($"score", 0).as("binnedScore")) 
                 .groupBy($"binnedScore")
                 .agg(count($"binnedScore") as "binnedScoreCounts")
                 .orderBy($"binnedScore")
                 .writeStream
                 .outputMode("complete")
                 .format("console")
                 .start()

query.awaitTermination() // hit cancel to terminate
-------------------------------------------
Batch: 0
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -1.0|                9|
|        0.0|               18|
|        1.0|               41|
|        2.0|               25|
|        3.0|                5|
|        4.0|                1|
|       10.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 1
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                1|
|       -1.0|               13|
|        0.0|               44|
|        1.0|               83|
|        2.0|               46|
|        3.0|               10|
|        4.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 2
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                2|
|       -1.0|               20|
|        0.0|               74|
|        1.0|              118|
|        2.0|               70|
|        3.0|               12|
|        4.0|                1|
|        9.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 3
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                4|
|       -1.0|               27|
|        0.0|              104|
|        1.0|              144|
|        2.0|               96|
|        3.0|               21|
|        4.0|                1|
|        9.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

-------------------------------------------
Batch: 4
-------------------------------------------
+-----------+-----------------+
|binnedScore|binnedScoreCounts|
+-----------+-----------------+
|       -2.0|                4|
|       -1.0|               32|
|        0.0|              125|
|        1.0|              179|
|        2.0|              125|
|        3.0|               30|
|        4.0|                2|
|        9.0|                1|
|       10.0|                1|
|       12.0|                1|
+-----------+-----------------+

Window Operations on Event Time

Aggregations over a sliding event-time window are straightforward with Structured Streaming and are very similar to grouped aggregations. In a grouped aggregation, aggregate values (e.g. counts) are maintained for each unique value in the user-specified grouping column. In case of window-based aggregations, aggregate values are maintained for each window the event-time of a row falls into. Let’s understand this with an illustration.

Imagine our quick example is modified and the stream now contains lines along with the time when the line was generated. Instead of running word counts, we want to count words within 10 minute windows, updating every 5 minutes. That is, word counts in words received between 10 minute windows 12:00 - 12:10, 12:05 - 12:15, 12:10 - 12:20, etc. Note that 12:00 - 12:10 means data that arrived after 12:00 but before 12:10. Now, consider a word that was received at 12:07. This word should increment the counts corresponding to two windows 12:00 - 12:10 and 12:05 - 12:15. So the counts will be indexed by both, the grouping key (i.e. the word) and the window (can be calculated from the event-time).

The result tables would look something like the following.

Window Operations

Since this windowing is similar to grouping, in code, you can use groupBy() and window() operations to express windowed aggregations. You can see the full code for the below examples in Scala/Java/Python.

Make sure the streaming job with animal names is running (or finished running) with files in /datasets/streamingFiles directory - this is the Quick Example in 037a_FilesForStructuredStreaming notebook.

display(dbutils.fs.ls("/datasets/streamingFiles"))
spark.read.format("text").load("/datasets/streamingFiles").show(5,false) // let's just read five  entries
+----------------------------------+
|value                             |
+----------------------------------+
|2020-11-16 10:30:04+00:00; bat rat|
|2020-11-16 10:30:06+00:00; rat bat|
|2020-11-16 10:30:08+00:00; rat dog|
|2020-11-16 10:30:10+00:00; rat cat|
|2020-11-16 10:30:12+00:00; cat bat|
+----------------------------------+
only showing top 5 rows
import spark.implicits._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import java.sql.Timestamp
spark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")

// a static DS is convenient to work with
val csvStaticDS = spark
   .read
   .option("sep", ";") // delimiter is ';'
   .csv("/datasets/streamingFiles/*.log")    // Equivalent to format("csv").load("/path/to/directory")
   .toDF("time","animals")
   .select(unix_timestamp($"time", "yyyy-MM-dd HH:mm:ss").cast(TimestampType).as("timestamp"), $"animals")
   .as[(Timestamp, String)]
   .flatMap(
     line => line._2.split(" ")
                 .filter(_ != "") // remove empty string from leading whitespace
                 .map(animal => (line._1, animal))
    )
   .toDF("timestamp", "animal")
   .as[(Timestamp, String)]
   
import spark.implicits._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import java.sql.Timestamp
csvStaticDS: org.apache.spark.sql.Dataset[(java.sql.Timestamp, String)] = [timestamp: timestamp, animal: string]
csvStaticDS.show(5,false)
+-------------------+------+
|timestamp          |animal|
+-------------------+------+
|2020-11-16 11:00:01|owl   |
|2020-11-16 11:00:01|cat   |
|2020-11-16 11:00:03|dog   |
|2020-11-16 11:00:03|pig   |
|2020-11-16 11:00:05|rat   |
+-------------------+------+
only showing top 5 rows
//make a user-specified schema for structured streaming
val userSchema = new StructType()
                      .add("time", "String") // we will read it as String and then convert into timestamp later
                      .add("animals", "String")

// streaming DS
val csvStreamingDS = spark
// the next three lines are needed for structured streaming from file streams
  .readStream // for streaming
  .option("MaxFilesPerTrigger", 1) //  for streaming
  .schema(userSchema) // for streaming
  .option("sep", ";") // delimiter is ';'
  .csv("/datasets/streamingFiles/*.log")    // Equivalent to format("csv").load("/path/to/directory")
  .toDF("time","animals")
  .select(unix_timestamp($"time", "yyyy-MM-dd HH:mm:ss").cast(TimestampType).as("timestamp"), $"animals")
  //.toDF("time","animals")
  .as[(Timestamp, String)]
  .flatMap(
     line => line._2.split(" ").map(animal => (line._1, animal))
    )
  .filter(_._2 != "")
  .toDF("timestamp", "animal")
  .as[(Timestamp, String)]
userSchema: org.apache.spark.sql.types.StructType = StructType(StructField(time,StringType,true), StructField(animals,StringType,true))
csvStreamingDS: org.apache.spark.sql.Dataset[(java.sql.Timestamp, String)] = [timestamp: timestamp, animal: string]
display(csvStreamingDS) // evaluate to see the animal words with timestamps streaming in
timestamp animal
2020-11-16T10:30:08.000+0000 rat
2020-11-16T10:30:08.000+0000 dog
2020-11-16T10:34:23.000+0000 bat
2020-11-16T10:34:23.000+0000 pig
2020-11-16T10:45:59.000+0000 cat
2020-11-16T10:45:59.000+0000 bat
2020-11-16T10:47:54.000+0000 pig
2020-11-16T10:47:54.000+0000 dog
2020-11-16T10:50:05.000+0000 dog
2020-11-16T10:50:05.000+0000 pig
2020-11-16T10:54:10.000+0000 bat
2020-11-16T10:54:10.000+0000 rat
2020-11-16T10:58:06.000+0000 rat
2020-11-16T10:58:06.000+0000 bat
2020-11-16T10:30:54.000+0000 cat
2020-11-16T10:30:54.000+0000 pig
2020-11-16T10:31:17.000+0000 dog
2020-11-16T10:31:17.000+0000 rat
2020-11-16T10:32:05.000+0000 cat
2020-11-16T10:32:05.000+0000 dog
2020-11-16T10:35:07.000+0000 pig
2020-11-16T10:35:07.000+0000 rat
2020-11-16T10:35:55.000+0000 rat
2020-11-16T10:35:55.000+0000 pig
2020-11-16T10:37:10.000+0000 dog
2020-11-16T10:37:10.000+0000 cat
2020-11-16T10:38:58.000+0000 owl
2020-11-16T10:38:58.000+0000 dog
2020-11-16T10:41:57.000+0000 rat
2020-11-16T10:41:57.000+0000 bat
2020-11-16T10:45:25.000+0000 dog
2020-11-16T10:45:25.000+0000 bat
2020-11-16T10:45:43.000+0000 pig
2020-11-16T10:45:43.000+0000 owl
2020-11-16T10:47:16.000+0000 rat
2020-11-16T10:47:16.000+0000 dog
2020-11-16T10:53:52.000+0000 rat
2020-11-16T10:53:52.000+0000 dog
2020-11-16T10:55:12.000+0000 dog
2020-11-16T10:55:12.000+0000 pig
2020-11-16T10:32:01.000+0000 owl
2020-11-16T10:32:01.000+0000 bat
2020-11-16T10:32:25.000+0000 cat
2020-11-16T10:32:25.000+0000 pig
2020-11-16T10:33:29.000+0000 rat
2020-11-16T10:33:29.000+0000 bat
2020-11-16T10:34:01.000+0000 pig
2020-11-16T10:34:01.000+0000 cat
2020-11-16T10:34:37.000+0000 dog
2020-11-16T10:34:37.000+0000 pig
2020-11-16T10:42:11.000+0000 pig
2020-11-16T10:42:11.000+0000 cat
2020-11-16T10:42:51.000+0000 pig
2020-11-16T10:42:51.000+0000 rat
2020-11-16T10:49:06.000+0000 pig
2020-11-16T10:49:06.000+0000 cat
// Group the data by window and word and compute the count of each group
val windowDuration = "180 seconds"
val slideDuration = "90 seconds"
val windowedCounts = csvStreamingDS.groupBy(
      window($"timestamp", windowDuration, slideDuration), $"animal"
    ).count().orderBy("window")

// Start running the query that prints the windowed word counts to the console
val query = windowedCounts.writeStream
      .outputMode("complete")
      .format("console")
      .option("truncate", "false")
      .start()

query.awaitTermination()
-------------------------------------------
Batch: 0
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 1
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 2
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 3
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 4
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|pig   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 5
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|pig   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|dog   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 6
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|pig   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|dog   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 7
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|pig   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|cat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|cat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|pig   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

Handling Late Data and Watermarking

Now consider what happens if one of the events arrives late to the application. For example, say, a word generated at 12:04 (i.e. event time) could be received by the application at 12:11. The application should use the time 12:04 instead of 12:11 to update the older counts for the window 12:00 - 12:10. This occurs naturally in our window-based grouping – Structured Streaming can maintain the intermediate state for partial aggregates for a long period of time such that late data can update aggregates of old windows correctly, as illustrated below.

Handling Late Data

However, to run this query for days, it’s necessary for the system to bound the amount of intermediate in-memory state it accumulates. This means the system needs to know when an old aggregate can be dropped from the in-memory state because the application is not going to receive late data for that aggregate any more. To enable this, in Spark 2.1, we have introduced watermarking, which lets the engine automatically track the current event time in the data and attempt to clean up old state accordingly. You can define the watermark of a query by specifying the event time column and the threshold on how late the data is expected to be in terms of event time. For a specific window starting at time T, the engine will maintain state and allow late data to update the state until (max event time seen by the engine - late threshold > T). In other words, late data within the threshold will be aggregated, but data later than the threshold will be dropped. Let’s understand this with an example. We can easily define watermarking on the previous example using withWatermark() as shown below.

// Group the data by window and word and compute the count of each group
val windowDuration = "180 seconds"
val slideDuration = "90 seconds"
val watermarkDuration = "10 minutes"
val windowedCounts = csvStreamingDS
     .withWatermark("timestamp", watermarkDuration)
     .groupBy(
      window($"timestamp", windowDuration, slideDuration), $"animal"
    ).count().orderBy("window")

// Start running the query that prints the windowed word counts to the console
val query = windowedCounts.writeStream
      .outputMode("complete")
      .format("console")
      .option("truncate", "false")
      .start()

query.awaitTermination()
-------------------------------------------
Batch: 0
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 1
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 2
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 3
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 4
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|dog   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|pig   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|pig   |1    |
+------------------------------------------+------+-----+

-------------------------------------------
Batch: 5
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|dog   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|pig   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|pig   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 6
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|pig   |1    |
|[2020-11-16 10:48:00, 2020-11-16 10:51:00]|dog   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|pig   |1    |
|[2020-11-16 10:49:30, 2020-11-16 10:52:30]|dog   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 7
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|pig   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|cat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|pig   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|cat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 8
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |2    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |2    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|pig   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|cat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|cat   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |2    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|pig   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |2    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|dog   |1    |
|[2020-11-16 10:46:30, 2020-11-16 10:49:30]|pig   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 9
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |2    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |2    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|cat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|pig   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|cat   |2    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |2    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|pig   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |3    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|dog   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|cat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|bat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|dog   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|cat   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

-------------------------------------------
Batch: 10
-------------------------------------------
+------------------------------------------+------+-----+
|window                                    |animal|count|
+------------------------------------------+------+-----+
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|dog   |2    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|rat   |2    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|cat   |1    |
|[2020-11-16 10:28:30, 2020-11-16 10:31:30]|pig   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|cat   |2    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|pig   |1    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|rat   |2    |
|[2020-11-16 10:30:00, 2020-11-16 10:33:00]|dog   |3    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|bat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|pig   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|cat   |1    |
|[2020-11-16 10:31:30, 2020-11-16 10:34:30]|dog   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|bat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|rat   |1    |
|[2020-11-16 10:33:00, 2020-11-16 10:36:00]|pig   |2    |
|[2020-11-16 10:34:30, 2020-11-16 10:37:30]|rat   |1    |
|[2020-11-16 10:34:30, 2020-11-16 10:37:30]|pig   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|bat   |1    |
|[2020-11-16 10:43:30, 2020-11-16 10:46:30]|cat   |1    |
|[2020-11-16 10:45:00, 2020-11-16 10:48:00]|pig   |1    |
+------------------------------------------+------+-----+
only showing top 20 rows

In this example, we are defining the watermark of the query on the value of the column “timestamp”, and also defining “10 minutes” as the threshold of how late the data is allowed to be. If this query is run in Update output mode (discussed later in the Output Modes section), the engine will keep updating counts of a window in the Result Table until the window is older than the watermark, which lags behind the current event time in column “timestamp” by 10 minutes. Here is an illustration.

Watermarking in Update Mode

As shown in the illustration, the maximum event time tracked by the engine is the blue dashed line, and the watermark set as (max event time - '10 mins') at the beginning of every trigger is the red line. For example, when the engine observes the data (12:14, dog), it sets the watermark for the next trigger as 12:04. This watermark lets the engine maintain intermediate state for an additional 10 minutes to allow late data to be counted. For example, the data (12:09, cat) is out of order and late, and it falls in windows 12:05 - 12:15 and 12:10 - 12:20. Since it is still ahead of the watermark 12:04 in the trigger, the engine still maintains the intermediate counts as state and correctly updates the counts of the related windows. However, when the watermark is updated to 12:11, the intermediate state for window (12:00 - 12:10) is cleared, and all subsequent data (e.g. (12:04, donkey)) is considered “too late” and therefore ignored. Note that after every trigger, the updated counts (i.e. purple rows) are written to the sink as the trigger output, as dictated by the Update mode.

Some sinks (e.g. files) may not support the fine-grained updates that Update Mode requires. To work with them, there is also Append Mode, where only the final counts are written to the sink. This is illustrated below.

Note that using withWatermark on a non-streaming Dataset is a no-op. Since the watermark should not affect any batch query in any way, it is simply ignored.

Watermarking in Append Mode

Similar to the Update Mode earlier, the engine maintains intermediate counts for each window. However, the partial counts are not updated to the Result Table and not written to the sink. The engine waits for “10 mins” for late data to be counted, then drops the intermediate state of any window older than the watermark, and appends the final counts to the Result Table/sink. For example, the final counts of window 12:00 - 12:10 are appended to the Result Table only after the watermark is updated to 12:11.
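
To make the contrast concrete, here is a minimal sketch (not run in this notebook) of the earlier windowed count rewritten for Append output mode. It assumes the csvStreamingDS and the imports from the cells above, and it drops the orderBy, since sorting a streaming Dataset is only supported after an aggregation in Complete mode.

// a minimal sketch, assuming csvStreamingDS and imports from the cells above
val appendWindowedCounts = csvStreamingDS
     .withWatermark("timestamp", "10 minutes")
     .groupBy(
      window($"timestamp", "180 seconds", "90 seconds"), $"animal"
    ).count()  // no orderBy here: sorting is not supported in Append mode

val appendQuery = appendWindowedCounts.writeStream
      .outputMode("append") // finalized windows are emitted once, after the watermark passes the window end
      .format("console")
      .option("truncate", "false")
      .start()

Unlike the Complete mode query run above, this lets the engine actually drop old window state once the watermark crosses a window's end.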

Conditions for watermarking to clean aggregation state

It is important to note that the following conditions must be satisfied for the watermarking to clean the state in aggregation queries (as of Spark 2.1.1, subject to change in the future).

  • Output mode must be Append or Update. Complete mode requires all aggregate data to be preserved, and hence cannot use watermarking to drop intermediate state. See the Output Modes section for detailed explanation of the semantics of each output mode.

  • The aggregation must have either the event-time column, or a window on the event-time column.

  • withWatermark must be called on the same column as the timestamp column used in the aggregate. For example, df.withWatermark("time", "1 min").groupBy("time2").count() is invalid in Append output mode, as watermark is defined on a different column from the aggregation column.

  • withWatermark must be called before the aggregation for the watermark details to be used. For example, df.groupBy("time").count().withWatermark("time", "1 min") is invalid in Append output mode.

Join Operations

Streaming DataFrames can be joined with static DataFrames to create new streaming DataFrames. Here are a few examples.

val staticDf = spark.read. ...
val streamingDf = spark.readStream. ...

streamingDf.join(staticDf, "type")          // inner equi-join with a static DF
streamingDf.join(staticDf, "type", "right_outer")  // right outer join with a static DF
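
As a concrete sketch using the animal stream from this notebook, one could join csvStreamingDS with a small static lookup table. The animalInfoDF below, including its habitat values, is a hypothetical example introduced only for illustration.

import spark.implicits._

// hypothetical static lookup table: animal -> habitat
val animalInfoDF = Seq(
  ("cat", "house"), ("dog", "house"), ("owl", "forest"),
  ("pig", "farm"), ("bat", "cave"), ("rat", "field")
).toDF("animal", "habitat")

// inner equi-join of the streaming Dataset with the static DataFrame on the "animal" column
val enrichedStream = csvStreamingDS.join(animalInfoDF, "animal")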

Streaming Deduplication

You can deduplicate records in data streams using a unique identifier in the events. This is exactly the same as deduplication on static data using a unique identifier column. The query will store the necessary amount of data from previous records so that it can filter duplicate records. Similar to aggregations, you can use deduplication with or without watermarking.

  • With watermark - If there is an upper bound on how late a duplicate record may arrive, then you can define a watermark on an event time column and deduplicate using both the guid and the event time columns. The query will use the watermark to remove old state data from past records that are not expected to get any duplicates any more. This bounds the amount of state the query has to maintain.

  • Without watermark - Since there are no bounds on when a duplicate record may arrive, the query stores the data from all the past records as state.

    val streamingDf = spark.readStream. ...  // columns: guid, eventTime, ...

    // Without watermark using guid column
    streamingDf.dropDuplicates("guid")

    // With watermark using guid and eventTime columns
    streamingDf
      .withWatermark("eventTime", "10 seconds")
      .dropDuplicates("guid", "eventTime")

Arbitrary Stateful Operations

Many use cases require more advanced stateful operations than aggregations. For example, in many use cases, you have to track sessions from data streams of events. For doing such sessionization, you will have to save arbitrary types of data as state, and perform arbitrary operations on the state using the data stream events in every trigger. Since Spark 2.2, this can be done using the operation mapGroupsWithState and the more powerful operation flatMapGroupsWithState. Both operations allow you to apply user-defined code on grouped Datasets to update user-defined state. For more concrete details, take a look at the API documentation (Scala/Java) and the examples (Scala/Java).
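
Since the notebook does not include such an example, here is a minimal, hedged sketch of mapGroupsWithState that keeps a running per-animal count as user-defined state (a stand-in for real sessionization). It assumes the csvStreamingDS defined earlier; the class and value names are made up for illustration.

import java.sql.Timestamp
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

// user-defined output record; the per-key state here is just a Long running count
case class AnimalRunningCount(animal: String, count: Long)

val statefulCounts = csvStreamingDS                   // Dataset[(Timestamp, String)] from above
  .groupByKey(_._2)                                   // group by the animal name
  .mapGroupsWithState[Long, AnimalRunningCount](GroupStateTimeout.NoTimeout) {
    (animal: String, rows: Iterator[(Timestamp, String)], state: GroupState[Long]) =>
      val newTotal = state.getOption.getOrElse(0L) + rows.size  // combine old state with this trigger's rows
      state.update(newTotal)                          // the engine persists this state across triggers
      AnimalRunningCount(animal, newTotal)
  }

val statefulQuery = statefulCounts.writeStream
  .outputMode("update")                               // mapGroupsWithState queries run in Update mode
  .format("console")
  .start()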

Unsupported Operations

There are a few DataFrame/Dataset operations that are not supported with streaming DataFrames/Datasets. Some of them are as follows.

  • Multiple streaming aggregations (i.e. a chain of aggregations on a streaming DF) are not yet supported on streaming Datasets.

  • Limit and take first N rows are not supported on streaming Datasets.

  • Distinct operations on streaming Datasets are not supported.

  • Sorting operations are supported on streaming Datasets only after an aggregation and in Complete Output Mode.

  • Outer joins between a streaming and a static Datasets are conditionally supported.

    • Full outer join with a streaming Dataset is not supported

    • Left outer join with a streaming Dataset on the right is not supported

    • Right outer join with a streaming Dataset on the left is not supported

  • Any kind of joins between two streaming Datasets is not yet supported.

In addition, there are some Dataset methods that will not work on streaming Datasets. They are actions that will immediately run queries and return results, which does not make sense on a streaming Dataset. Rather, those functionalities can be done by explicitly starting a streaming query (see the next section regarding that).

  • count() - Cannot return a single count from a streaming Dataset. Instead, use ds.groupBy().count() which returns a streaming Dataset containing a running count.

  • foreach() - Instead use ds.writeStream.foreach(...) (see next section).

  • show() - Instead use the console sink (see next section).

If you try any of these operations, you will see an AnalysisException like “operation XYZ is not supported with streaming DataFrames/Datasets”. While some of them may be supported in future releases of Spark, there are others which are fundamentally hard to implement on streaming data efficiently. For example, sorting on the input stream is not supported, as it requires keeping track of all the data received in the stream. This is therefore fundamentally hard to execute efficiently.

Starting Streaming Queries

Once you have defined the final result DataFrame/Dataset, all that is left is for you to start the streaming computation. To do that, you have to use the DataStreamWriter (Scala/Java/Python docs) returned through Dataset.writeStream(). You will have to specify one or more of the following in this interface.

  • Details of the output sink: Data format, location, etc.

  • Output mode: Specify what gets written to the output sink.

  • Query name: Optionally, specify a unique name of the query for identification.

  • Trigger interval: Optionally, specify the trigger interval. If it is not specified, the system will check for availability of new data as soon as the previous processing has completed. If a trigger time is missed because the previous processing has not completed, then the system will attempt to trigger at the next trigger point, not immediately after the processing has completed.

  • Checkpoint location: For some output sinks where the end-to-end fault-tolerance can be guaranteed, specify the location where the system will write all the checkpoint information. This should be a directory in an HDFS-compatible fault-tolerant file system. The semantics of checkpointing is discussed in more detail in the next section.
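
Putting these together, here is a minimal sketch that starts a running count of animals from the csvStreamingDS above; the query name and checkpoint path are hypothetical.

import org.apache.spark.sql.streaming.Trigger

val exampleQuery = csvStreamingDS
  .groupBy($"animal").count()
  .writeStream
  .queryName("animalCounts")                                   // query name (hypothetical)
  .outputMode("complete")                                      // output mode
  .format("console")                                           // output sink
  .trigger(Trigger.ProcessingTime("10 seconds"))               // trigger interval
  .option("checkpointLocation", "/tmp/animalCountsCheckpoint") // checkpoint location (hypothetical path)
  .start()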

Output Modes

There are a few types of output modes.

  • Append mode (default) - This is the default mode, where only the new rows added to the Result Table since the last trigger will be outputted to the sink. This is supported only for those queries where rows added to the Result Table are never going to change. Hence, this mode guarantees that each row will be output only once (assuming a fault-tolerant sink). For example, queries with only select, where, map, flatMap, filter, join, etc. will support Append mode.

  • Complete mode - The whole Result Table will be outputted to the sink after every trigger. This is supported for aggregation queries.

  • Update mode - (Available since Spark 2.1.1) Only the rows in the Result Table that were updated since the last trigger will be outputted to the sink. More information to be added in future releases.

Different types of streaming queries support different output modes. Here is the compatibility matrix.

| Query Type | Supported Output Modes | Notes |
|---|---|---|
| Queries with aggregation: aggregation on event-time with watermark | Append, Update, Complete | Append mode uses the watermark to drop old aggregation state. But the output of a windowed aggregation is delayed by the late threshold specified in `withWatermark()`, since by the mode's semantics rows can be added to the Result Table only once after they are finalized (i.e. after the watermark is crossed). See the Late Data section for more details. Update mode uses the watermark to drop old aggregation state. Complete mode does not drop old aggregation state since by definition this mode preserves all data in the Result Table. |
| Queries with aggregation: other aggregations | Complete, Update | Since no watermark is defined (it is only defined in the other category), old aggregation state is not dropped. Append mode is not supported as aggregates can update, violating the semantics of this mode. |
| Queries with mapGroupsWithState | Update | |
| Queries with flatMapGroupsWithState: Append operation mode | Append | Aggregations are allowed after flatMapGroupsWithState. |
| Queries with flatMapGroupsWithState: Update operation mode | Update | Aggregations are not allowed after flatMapGroupsWithState. |
| Other queries | Append, Update | Complete mode is not supported as it is infeasible to keep all unaggregated data in the Result Table. |

Output Sinks

There are a few types of built-in output sinks.

  • File sink - Stores the output to a directory.
    writeStream
        .format("parquet")        // can be "orc", "json", "csv", etc.
        .option("path", "path/to/destination/dir")
        .start()
  • Foreach sink - Runs arbitrary computation on the records in the output. See later in the section for more details.
    writeStream
        .foreach(...)
        .start()
  • Console sink (for debugging) - Prints the output to the console/stdout every time there is a trigger. Both Append and Complete output modes are supported. This should be used for debugging purposes on low data volumes as the entire output is collected and stored in the driver’s memory after every trigger.
    writeStream
        .format("console")
        .start()
  • Memory sink (for debugging) - The output is stored in memory as an in-memory table. Both Append and Complete output modes are supported. This should be used for debugging purposes on low data volumes as the entire output is collected and stored in the driver’s memory. Hence, use it with caution.
    writeStream
        .format("memory")
        .queryName("tableName")
        .start()

Some sinks are not fault-tolerant because they do not guarantee persistence of the output and are meant for debugging purposes only. See the earlier section on fault-tolerance semantics. Here are the details of all the sinks in Spark.

| Sink | Supported Output Modes | Options | Fault-tolerant | Notes |
|---|---|---|---|---|
| File Sink | Append | path: path to the output directory, must be specified. For file-format-specific options, see the related methods in DataFrameWriter (Scala/Java/Python/R), e.g. for the "parquet" format see DataFrameWriter.parquet() | Yes | Supports writes to partitioned tables. Partitioning by time may be useful. |
| Foreach Sink | Append, Update, Complete | None | Depends on the ForeachWriter implementation | More details in the next section |
| Console Sink | Append, Update, Complete | numRows: number of rows to print every trigger (default: 20); truncate: whether to truncate the output if too long (default: true) | No | |
| Memory Sink | Append, Complete | None | No. But in Complete Mode, a restarted query will recreate the full table. | Table name is the query name. |

Note that you have to call start() to actually start the execution of the query. This returns a StreamingQuery object which is a handle to the continuously running execution. You can use this object to manage the query, which we will discuss in the next subsection. For now, let’s understand all this with a few examples.

```
// ========== DF with no aggregations ==========
val noAggDF = deviceDataDf.select("device").where("signal > 10")

// Print new data to console
noAggDF
  .writeStream
  .format("console")
  .start()

// Write new data to Parquet files
noAggDF
  .writeStream
  .format("parquet")
  .option("checkpointLocation", "path/to/checkpoint/dir")
  .option("path", "path/to/destination/dir")
  .start()

// ========== DF with aggregation ==========
val aggDF = df.groupBy("device").count()

// Print updated aggregations to console
aggDF
  .writeStream
  .outputMode("complete")
  .format("console")
  .start()

// Have all the aggregates in an in-memory table
aggDF
  .writeStream
  .queryName("aggregates")    // this query name will be the table name
  .outputMode("complete")
  .format("memory")
  .start()

spark.sql("select * from aggregates").show()   // interactively query in-memory table
```

Using Foreach

The foreach operation allows arbitrary operations to be computed on the output data. As of Spark 2.1, this is available only for Scala and Java. To use this, you will have to implement the interface ForeachWriter (Scala/Java docs), which has methods that get called whenever there is a sequence of rows generated as output after a trigger. Note the following important points; a minimal sketch of such a writer follows the list.

  • The writer must be serializable, as it will be serialized and sent to the executors for execution.

  • All three methods, open, process and close, will be called on the executors.

  • The writer must do all the initialization (e.g. opening connections, starting a transaction, etc.) only when the open method is called. Be aware that, if there is any initialization in the class as soon as the object is created, then that initialization will happen in the driver (because that is where the instance is being created), which may not be what you intend.

  • version and partition are two parameters in open that uniquely represent a set of rows that needs to be pushed out. version is a monotonically increasing id that increases with every trigger. partition is an id that represents a partition of the output, since the output is distributed and will be processed on multiple executors.

  • open can use the version and partition to choose whether it needs to write the sequence of rows. Accordingly, it can return true (proceed with writing), or false (no need to write). If false is returned, then process will not be called on any row. For example, after a partial failure, some of the output partitions of the failed trigger may have already been committed to a database. Based on metadata stored in the database, the writer can identify partitions that have already been committed and accordingly return false to skip committing them again.

  • Whenever open is called, close will also be called (unless the JVM exits due to some error). This is true even if open returns false. If there is any error in processing and writing the data, close will be called with the error. It is your responsibility to clean up state (e.g. connections, transactions, etc.) that have been created in open such that there are no resource leaks.
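
Here is a minimal sketch of such a writer wiring open, process and close together; the println stands in for writes to an external system, and `noAggDF` from the examples above is reused.

```
// Minimal sketch of a ForeachWriter: open/process/close as described above.
// println stands in for a real external write (e.g. a database insert).
import org.apache.spark.sql.{ForeachWriter, Row}

val writer = new ForeachWriter[Row] {
  // Called on an executor for each (partition, version); return false to skip writing this set of rows
  def open(partitionId: Long, version: Long): Boolean = {
    // open connections / start a transaction here, not in the constructor (which runs on the driver)
    true
  }
  // Called for every row when open returned true
  def process(record: Row): Unit = {
    println(record)
  }
  // Always called (with the error, if any); release connections and clean up state here
  def close(errorOrNull: Throwable): Unit = ()
}

noAggDF.writeStream
  .foreach(writer)
  .start()
```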

Managing Streaming Queries

The StreamingQuery object created when a query is started can be used to monitor and manage the query.

    val query = df.writeStream.format("console").start()   // get the query object

    query.id          // get the unique identifier of the running query that persists across restarts from checkpoint data

    query.runId       // get the unique id of this run of the query, which will be generated at every start/restart

    query.name        // get the name of the auto-generated or user-specified name

    query.explain()   // print detailed explanations of the query

    query.stop()      // stop the query

    query.awaitTermination()   // block until query is terminated, with stop() or with error

    query.exception       // the exception if the query has been terminated with error

    query.recentProgress  // an array of the most recent progress updates for this query

    query.lastProgress    // the most recent progress update of this streaming query

You can start any number of queries in a single SparkSession. They will all be running concurrently, sharing the cluster resources. You can use sparkSession.streams to get the StreamingQueryManager (Scala/Java/Python docs) that can be used to manage the currently active queries.

    val spark: SparkSession = ...

    spark.streams.active    // get the list of currently active streaming queries

    spark.streams.get(id)   // get a query object by its unique id

    spark.streams.awaitAnyTermination()   // block until any one of them terminates

Monitoring Streaming Queries

There are two APIs for monitoring and debugging active queries - interactively and asynchronously.

Interactive APIs

You can directly get the current status and metrics of an active query using streamingQuery.lastProgress() and streamingQuery.status(). lastProgress() returns a StreamingQueryProgress object in Scala and Java and a dictionary with the same fields in Python. It has all the information about the progress made in the last trigger of the stream - what data was processed, what the processing rates and latencies were, etc. There is also streamingQuery.recentProgress which returns an array of the last few progress updates.

In addition, streamingQuery.status() returns a StreamingQueryStatus object in Scala and Java and a dictionary with the same fields in Python. It gives information about what the query is immediately doing - is a trigger active, is data being processed, etc.

Here are a few examples.

    val query: StreamingQuery = ...

    println(query.lastProgress)

    /* Will print something like the following.

    {
      "id" : "ce011fdc-8762-4dcb-84eb-a77333e28109",
      "runId" : "88e2ff94-ede0-45a8-b687-6316fbef529a",
      "name" : "MyQuery",
      "timestamp" : "2016-12-14T18:45:24.873Z",
      "numInputRows" : 10,
      "inputRowsPerSecond" : 120.0,
      "processedRowsPerSecond" : 200.0,
      "durationMs" : {
        "triggerExecution" : 3,
        "getOffset" : 2
      },
      "eventTime" : {
        "watermark" : "2016-12-14T18:45:24.873Z"
      },
      "stateOperators" : [ ],
      "sources" : [ {
        "description" : "KafkaSource[Subscribe[topic-0]]",
        "startOffset" : {
          "topic-0" : {
            "2" : 0,
            "4" : 1,
            "1" : 1,
            "3" : 1,
            "0" : 1
          }
        },
        "endOffset" : {
          "topic-0" : {
            "2" : 0,
            "4" : 115,
            "1" : 134,
            "3" : 21,
            "0" : 534
          }
        },
        "numInputRows" : 10,
        "inputRowsPerSecond" : 120.0,
        "processedRowsPerSecond" : 200.0
      } ],
      "sink" : {
        "description" : "MemorySink"
      }
    }
    */


    println(query.status)

    /*  Will print something like the following.
    {
      "message" : "Waiting for data to arrive",
      "isDataAvailable" : false,
      "isTriggerActive" : false
    }
    */

Asynchronous API

You can also asynchronously monitor all queries associated with a SparkSession by attaching a StreamingQueryListener (Scala/Java docs). Once you attach your custom StreamingQueryListener object with sparkSession.streams.addListener(), you will get callbacks when a query is started and stopped and when there is progress made in an active query. Here is an example.

    val spark: SparkSession = ...

    spark.streams.addListener(new StreamingQueryListener() {
        override def onQueryStarted(queryStarted: QueryStartedEvent): Unit = {
            println("Query started: " + queryStarted.id)
        }
        override def onQueryTerminated(queryTerminated: QueryTerminatedEvent): Unit = {
            println("Query terminated: " + queryTerminated.id)
        }
        override def onQueryProgress(queryProgress: QueryProgressEvent): Unit = {
            println("Query made progress: " + queryProgress.progress)
        }
    })

Recovering from Failures with Checkpointing

In case of a failure or intentional shutdown, you can recover the previous progress and state of a query, and continue where it left off. This is done using checkpointing and write-ahead logs. You can configure a query with a checkpoint location, and the query will save all the progress information (i.e. the range of offsets processed in each trigger) and the running aggregates (e.g. the word counts in the quick example) to the checkpoint location. This checkpoint location has to be a path in an HDFS-compatible file system, and can be set as an option in the DataStreamWriter when starting a query.

    aggDF
      .writeStream
      .outputMode("complete")
      .option("checkpointLocation", "path/to/HDFS/dir")
      .format("memory")
      .start()

ScaDaMaLe Course site and book

Structured Streaming using Scala DataFrames API - Exercise

Apache Spark 2.0+ adds the first version of a new higher-level stream processing API, Structured Streaming. In this notebook we are going to take a quick look at how to use the DataFrame API to build Structured Streaming applications. We want to compute real-time metrics like running counts and windowed counts on a stream of timestamped actions (e.g. Open, Close, etc.).

This is built on the public databricks notebook importable from here.

// Spark 3.0.1 scala 2.12
spark.version
res1: String = 3.0.1

Sample Data

We have some sample action data as files in /databricks-datasets/structured-streaming/events/ which we are going to use to build this application. Let's take a look at the contents of this directory.

ls /databricks-datasets/structured-streaming/events/
path name size
dbfs:/databricks-datasets/structured-streaming/events/file-0.json file-0.json 72530.0
dbfs:/databricks-datasets/structured-streaming/events/file-1.json file-1.json 72961.0
dbfs:/databricks-datasets/structured-streaming/events/file-10.json file-10.json 73025.0
dbfs:/databricks-datasets/structured-streaming/events/file-11.json file-11.json 72999.0
dbfs:/databricks-datasets/structured-streaming/events/file-12.json file-12.json 72987.0
dbfs:/databricks-datasets/structured-streaming/events/file-13.json file-13.json 73006.0
dbfs:/databricks-datasets/structured-streaming/events/file-14.json file-14.json 73003.0
dbfs:/databricks-datasets/structured-streaming/events/file-15.json file-15.json 73007.0
dbfs:/databricks-datasets/structured-streaming/events/file-16.json file-16.json 72978.0
dbfs:/databricks-datasets/structured-streaming/events/file-17.json file-17.json 73008.0
dbfs:/databricks-datasets/structured-streaming/events/file-18.json file-18.json 73002.0
dbfs:/databricks-datasets/structured-streaming/events/file-19.json file-19.json 73014.0
dbfs:/databricks-datasets/structured-streaming/events/file-2.json file-2.json 73007.0
dbfs:/databricks-datasets/structured-streaming/events/file-20.json file-20.json 72987.0
dbfs:/databricks-datasets/structured-streaming/events/file-21.json file-21.json 72983.0
dbfs:/databricks-datasets/structured-streaming/events/file-22.json file-22.json 73009.0
dbfs:/databricks-datasets/structured-streaming/events/file-23.json file-23.json 72985.0
dbfs:/databricks-datasets/structured-streaming/events/file-24.json file-24.json 73020.0
dbfs:/databricks-datasets/structured-streaming/events/file-25.json file-25.json 72980.0
dbfs:/databricks-datasets/structured-streaming/events/file-26.json file-26.json 73002.0
dbfs:/databricks-datasets/structured-streaming/events/file-27.json file-27.json 73013.0
dbfs:/databricks-datasets/structured-streaming/events/file-28.json file-28.json 73005.0
dbfs:/databricks-datasets/structured-streaming/events/file-29.json file-29.json 72977.0
dbfs:/databricks-datasets/structured-streaming/events/file-3.json file-3.json 72996.0
dbfs:/databricks-datasets/structured-streaming/events/file-30.json file-30.json 73009.0
dbfs:/databricks-datasets/structured-streaming/events/file-31.json file-31.json 73008.0
dbfs:/databricks-datasets/structured-streaming/events/file-32.json file-32.json 72982.0
dbfs:/databricks-datasets/structured-streaming/events/file-33.json file-33.json 73033.0
dbfs:/databricks-datasets/structured-streaming/events/file-34.json file-34.json 72985.0
dbfs:/databricks-datasets/structured-streaming/events/file-35.json file-35.json 72974.0
dbfs:/databricks-datasets/structured-streaming/events/file-36.json file-36.json 73013.0
dbfs:/databricks-datasets/structured-streaming/events/file-37.json file-37.json 72989.0
dbfs:/databricks-datasets/structured-streaming/events/file-38.json file-38.json 72999.0
dbfs:/databricks-datasets/structured-streaming/events/file-39.json file-39.json 73013.0
dbfs:/databricks-datasets/structured-streaming/events/file-4.json file-4.json 72992.0
dbfs:/databricks-datasets/structured-streaming/events/file-40.json file-40.json 72986.0
dbfs:/databricks-datasets/structured-streaming/events/file-41.json file-41.json 73019.0
dbfs:/databricks-datasets/structured-streaming/events/file-42.json file-42.json 72986.0
dbfs:/databricks-datasets/structured-streaming/events/file-43.json file-43.json 72990.0
dbfs:/databricks-datasets/structured-streaming/events/file-44.json file-44.json 73018.0
dbfs:/databricks-datasets/structured-streaming/events/file-45.json file-45.json 72997.0
dbfs:/databricks-datasets/structured-streaming/events/file-46.json file-46.json 72991.0
dbfs:/databricks-datasets/structured-streaming/events/file-47.json file-47.json 73009.0
dbfs:/databricks-datasets/structured-streaming/events/file-48.json file-48.json 72993.0
dbfs:/databricks-datasets/structured-streaming/events/file-49.json file-49.json 73496.0
dbfs:/databricks-datasets/structured-streaming/events/file-5.json file-5.json 72998.0
dbfs:/databricks-datasets/structured-streaming/events/file-6.json file-6.json 72997.0
dbfs:/databricks-datasets/structured-streaming/events/file-7.json file-7.json 73022.0
dbfs:/databricks-datasets/structured-streaming/events/file-8.json file-8.json 72997.0
dbfs:/databricks-datasets/structured-streaming/events/file-9.json file-9.json 72970.0

There are 50 JSON files in the directory. Let's see what each JSON file contains.

head /databricks-datasets/structured-streaming/events/file-0.json
[Truncated to first 65536 bytes]
{"time":1469501107,"action":"Open"}
{"time":1469501147,"action":"Open"}
{"time":1469501202,"action":"Open"}
{"time":1469501219,"action":"Open"}
{"time":1469501225,"action":"Open"}
{"time":1469501234,"action":"Open"}
{"time":1469501245,"action":"Open"}
{"time":1469501246,"action":"Open"}
{"time":1469501248,"action":"Open"}
{"time":1469501256,"action":"Open"}
{"time":1469501264,"action":"Open"}
{"time":1469501266,"action":"Open"}
{"time":1469501267,"action":"Open"}
{"time":1469501269,"action":"Open"}
{"time":1469501271,"action":"Open"}
{"time":1469501282,"action":"Open"}
{"time":1469501285,"action":"Open"}
{"time":1469501291,"action":"Open"}
{"time":1469501297,"action":"Open"}
{"time":1469501303,"action":"Open"}
{"time":1469501322,"action":"Open"}
{"time":1469501335,"action":"Open"}
{"time":1469501344,"action":"Open"}
{"time":1469501346,"action":"Open"}
{"time":1469501349,"action":"Open"}
{"time":1469501357,"action":"Open"}
{"time":1469501366,"action":"Open"}
{"time":1469501371,"action":"Open"}
{"time":1469501375,"action":"Open"}
{"time":1469501375,"action":"Open"}
{"time":1469501381,"action":"Open"}
{"time":1469501392,"action":"Open"}
{"time":1469501402,"action":"Open"}
{"time":1469501407,"action":"Open"}
{"time":1469501410,"action":"Open"}
{"time":1469501420,"action":"Open"}
{"time":1469501424,"action":"Open"}
{"time":1469501438,"action":"Open"}
{"time":1469501442,"action":"Close"}
{"time":1469501462,"action":"Open"}
{"time":1469501480,"action":"Open"}
{"time":1469501488,"action":"Open"}
{"time":1469501489,"action":"Open"}
{"time":1469501491,"action":"Open"}
{"time":1469501503,"action":"Open"}
{"time":1469501505,"action":"Open"}
{"time":1469501509,"action":"Open"}
{"time":1469501513,"action":"Open"}
{"time":1469501517,"action":"Open"}
{"time":1469501520,"action":"Open"}
{"time":1469501525,"action":"Open"}
{"time":1469501533,"action":"Open"}
{"time":1469501539,"action":"Open"}
{"time":1469501540,"action":"Open"}
{"time":1469501541,"action":"Open"}
{"time":1469501543,"action":"Open"}
{"time":1469501544,"action":"Open"}
{"time":1469501545,"action":"Close"}
{"time":1469501545,"action":"Open"}
{"time":1469501547,"action":"Open"}
{"time":1469501552,"action":"Open"}
{"time":1469501557,"action":"Open"}
{"time":1469501559,"action":"Open"}
{"time":1469501560,"action":"Open"}
{"time":1469501560,"action":"Open"}
{"time":1469501565,"action":"Open"}
{"time":1469501566,"action":"Open"}
{"time":1469501574,"action":"Open"}
{"time":1469501575,"action":"Open"}
{"time":1469501575,"action":"Open"}
{"time":1469501578,"action":"Open"}
{"time":1469501581,"action":"Open"}
{"time":1469501584,"action":"Open"}
{"time":1469501600,"action":"Open"}
{"time":1469501601,"action":"Open"}
{"time":1469501603,"action":"Open"}
{"time":1469501610,"action":"Open"}
{"time":1469501620,"action":"Open"}
{"time":1469501621,"action":"Open"}
{"time":1469501625,"action":"Open"}
{"time":1469501625,"action":"Close"}
{"time":1469501626,"action":"Open"}
{"time":1469501631,"action":"Open"}
{"time":1469501632,"action":"Open"}
{"time":1469501632,"action":"Open"}
{"time":1469501638,"action":"Open"}
{"time":1469501643,"action":"Open"}
{"time":1469501646,"action":"Open"}
{"time":1469501662,"action":"Open"}
{"time":1469501662,"action":"Open"}
{"time":1469501662,"action":"Open"}
{"time":1469501663,"action":"Open"}
{"time":1469501667,"action":"Open"}
{"time":1469501674,"action":"Open"}
{"time":1469501675,"action":"Open"}
{"time":1469501678,"action":"Close"}
{"time":1469501680,"action":"Open"}
{"time":1469501685,"action":"Open"}
{"time":1469501686,"action":"Open"}
{"time":1469501689,"action":"Open"}
{"time":1469501691,"action":"Open"}
{"time":1469501694,"action":"Open"}
{"time":1469501696,"action":"Close"}
{"time":1469501702,"action":"Open"}
{"time":1469501703,"action":"Open"}
{"time":1469501704,"action":"Open"}
{"time":1469501706,"action":"Open"}
{"time":1469501706,"action":"Open"}
{"time":1469501710,"action":"Open"}
{"time":1469501715,"action":"Open"}
{"time":1469501717,"action":"Open"}
{"time":1469501719,"action":"Open"}
{"time":1469501719,"action":"Open"}
{"time":1469501734,"action":"Open"}
{"time":1469501739,"action":"Open"}
{"time":1469501740,"action":"Open"}
{"time":1469501747,"action":"Open"}
{"time":1469501749,"action":"Open"}
{"time":1469501749,"action":"Close"}
{"time":1469501754,"action":"Open"}
{"time":1469501755,"action":"Open"}
{"time":1469501756,"action":"Open"}
{"time":1469501756,"action":"Open"}
{"time":1469501757,"action":"Open"}
{"time":1469501758,"action":"Open"}
{"time":1469501759,"action":"Open"}
{"time":1469501761,"action":"Open"}
{"time":1469501764,"action":"Open"}
{"time":1469501772,"action":"Open"}
{"time":1469501772,"action":"Open"}
{"time":1469501776,"action":"Close"}
{"time":1469501780,"action":"Open"}
{"time":1469501782,"action":"Open"}
{"time":1469501783,"action":"Open"}
{"time":1469501785,"action":"Open"}
{"time":1469501789,"action":"Open"}
{"time":1469501795,"action":"Open"}
{"time":1469501802,"action":"Open"}
{"time":1469501802,"action":"Open"}
{"time":1469501806,"action":"Open"}
{"time":1469501813,"action":"Open"}
{"time":1469501817,"action":"Open"}
{"time":1469501818,"action":"Open"}
{"time":1469501819,"action":"Close"}
{"time":1469501828,"action":"Open"}
{"time":1469501829,"action":"Open"}
{"time":1469501830,"action":"Open"}
{"time":1469501833,"action":"Open"}
{"time":1469501835,"action":"Open"}
{"time":1469501837,"action":"Open"}
{"time":1469501838,"action":"Open"}
{"time":1469501840,"action":"Open"}
{"time":1469501845,"action":"Open"}
{"time":1469501848,"action":"Open"}
{"time":1469501853,"action":"Open"}
{"time":1469501855,"action":"Open"}
{"time":1469501861,"action":"Close"}
{"time":1469501861,"action":"Open"}
{"time":1469501862,"action":"Open"}
{"time":1469501863,"action":"Open"}
{"time":1469501865,"action":"Open"}
{"time":1469501873,"action":"Open"}
{"time":1469501884,"action":"Open"}
{"time":1469501895,"action":"Open"}
{"time":1469501904,"action":"Open"}
{"time":1469501907,"action":"Open"}
{"time":1469501909,"action":"Close"}
{"time":1469501909,"action":"Open"}
{"time":1469501911,"action":"Open"}
{"time":1469501929,"action":"Open"}
{"time":1469501930,"action":"Open"}
{"time":1469501930,"action":"Open"}
{"time":1469501931,"action":"Open"}
{"time":1469501935,"action":"Open"}
{"time":1469501935,"action":"Open"}
{"time":1469501946,"action":"Open"}
{"time":1469501946,"action":"Open"}
{"time":1469501959,"action":"Open"}
{"time":1469501967,"action":"Open"}
{"time":1469501972,"action":"Close"}
{"time":1469501976,"action":"Open"}
{"time":1469501978,"action":"Open"}
{"time":1469501978,"action":"Open"}
{"time":1469501978,"action":"Open"}
{"time":1469501980,"action":"Open"}
{"time":1469501980,"action":"Open"}
{"time":1469501985,"action":"Open"}
{"time":1469501988,"action":"Open"}
{"time":1469501992,"action":"Open"}
{"time":1469501996,"action":"Open"}
{"time":1469502005,"action":"Open"}
{"time":1469502010,"action":"Open"}
{"time":1469502014,"action":"Close"}
{"time":1469502020,"action":"Open"}
{"time":1469502022,"action":"Open"}
{"time":1469502022,"action":"Open"}
{"time":1469502031,"action":"Open"}
{"time":1469502031,"action":"Open"}
{"time":1469502033,"action":"Open"}
{"time":1469502035,"action":"Open"}
{"time":1469502038,"action":"Open"}
{"time":1469502044,"action":"Open"}
{"time":1469502054,"action":"Open"}
{"time":1469502054,"action":"Open"}
{"time":1469502054,"action":"Open"}
{"time":1469502057,"action":"Open"}
{"time":1469502060,"action":"Open"}
{"time":1469502065,"action":"Open"}
{"time":1469502067,"action":"Open"}
{"time":1469502071,"action":"Open"}
{"time":1469502071,"action":"Open"}
{"time":1469502072,"action":"Close"}
{"time":1469502073,"action":"Open"}
{"time":1469502077,"action":"Open"}
{"time":1469502080,"action":"Open"}
{"time":1469502092,"action":"Open"}
{"time":1469502097,"action":"Open"}
{"time":1469502105,"action":"Open"}
{"time":1469502109,"action":"Open"}
{"time":1469502118,"action":"Open"}
{"time":1469502126,"action":"Open"}
{"time":1469502127,"action":"Open"}
{"time":1469502130,"action":"Open"}
{"time":1469502130,"action":"Open"}
{"time":1469502132,"action":"Open"}
{"time":1469502135,"action":"Open"}
{"time":1469502144,"action":"Open"}
{"time":1469502145,"action":"Open"}
{"time":1469502147,"action":"Open"}
{"time":1469502148,"action":"Close"}
{"time":1469502154,"action":"Open"}
{"time":1469502157,"action":"Open"}
{"time":1469502165,"action":"Open"}
{"time":1469502177,"action":"Open"}
{"time":1469502181,"action":"Open"}
{"time":1469502181,"action":"Open"}
{"time":1469502182,"action":"Open"}
{"time":1469502184,"action":"Open"}
{"time":1469502184,"action":"Open"}
{"time":1469502190,"action":"Open"}
{"time":1469502194,"action":"Open"}
{"time":1469502201,"action":"Open"}
{"time":1469502202,"action":"Open"}
{"time":1469502205,"action":"Open"}
{"time":1469502206,"action":"Open"}
{"time":1469502211,"action":"Open"}
{"time":1469502217,"action":"Open"}
{"time":1469502218,"action":"Open"}
{"time":1469502229,"action":"Open"}
{"time":1469502231,"action":"Open"}
{"time":1469502231,"action":"Open"}
{"time":1469502234,"action":"Open"}
{"time":1469502236,"action":"Open"}
{"time":1469502241,"action":"Open"}
{"time":1469502244,"action":"Open"}
{"time":1469502245,"action":"Open"}
{"time":1469502246,"action":"Open"}
{"time":1469502253,"action":"Open"}
{"time":1469502257,"action":"Open"}
{"time":1469502258,"action":"Open"}
{"time":1469502259,"action":"Open"}
{"time":1469502259,"action":"Open"}
{"time":1469502261,"action":"Close"}
{"time":1469502267,"action":"Open"}
{"time":1469502269,"action":"Open"}
{"time":1469502269,"action":"Open"}
{"time":1469502270,"action":"Open"}
{"time":1469502272,"action":"Open"}
{"time":1469502272,"action":"Open"}
{"time":1469502273,"action":"Open"}
{"time":1469502273,"action":"Open"}
{"time":1469502275,"action":"Open"}
{"time":1469502277,"action":"Open"}
{"time":1469502279,"action":"Open"}
{"time":1469502279,"action":"Open"}
{"time":1469502282,"action":"Close"}
{"time":1469502285,"action":"Open"}
{"time":1469502286,"action":"Open"}
{"time":1469502292,"action":"Open"}
{"time":1469502294,"action":"Open"}
{"time":1469502298,"action":"Open"}
{"time":1469502301,"action":"Open"}
{"time":1469502302,"action":"Open"}
{"time":1469502304,"action":"Open"}
{"time":1469502308,"action":"Open"}
{"time":1469502318,"action":"Open"}
{"time":1469502323,"action":"Open"}
{"time":1469502328,"action":"Open"}
{"time":1469502333,"action":"Open"}
{"time":1469502336,"action":"Close"}
{"time":1469502338,"action":"Close"}
{"time":1469502346,"action":"Open"}
{"time":1469502348,"action":"Open"}
{"time":1469502350,"action":"Open"}
{"time":1469502351,"action":"Close"}
{"time":1469502357,"action":"Close"}
{"time":1469502361,"action":"Open"}
{"time":1469502361,"action":"Open"}
{"time":1469502364,"action":"Open"}
{"time":1469502365,"action":"Open"}
{"time":1469502367,"action":"Open"}
{"time":1469502369,"action":"Open"}
{"time":1469502372,"action":"Open"}
{"time":1469502374,"action":"Open"}
{"time":1469502377,"action":"Open"}
{"time":1469502379,"action":"Close"}
{"time":1469502379,"action":"Open"}
{"time":1469502382,"action":"Open"}
{"time":1469502385,"action":"Open"}
{"time":1469502388,"action":"Open"}
{"time":1469502404,"action":"Open"}
{"time":1469502411,"action":"Open"}
{"time":1469502416,"action":"Open"}
{"time":1469502416,"action":"Open"}
{"time":1469502417,"action":"Close"}
{"time":1469502422,"action":"Open"}
{"time":1469502429,"action":"Open"}
{"time":1469502430,"action":"Open"}
{"time":1469502430,"action":"Open"}
{"time":1469502432,"action":"Open"}
{"time":1469502432,"action":"Open"}
{"time":1469502433,"action":"Open"}
{"time":1469502444,"action":"Open"}
{"time":1469502445,"action":"Open"}
{"time":1469502446,"action":"Open"}
{"time":1469502446,"action":"Open"}
{"time":1469502453,"action":"Open"}
{"time":1469502456,"action":"Close"}
{"time":1469502464,"action":"Open"}
{"time":1469502470,"action":"Open"}
{"time":1469502471,"action":"Open"}
{"time":1469502472,"action":"Open"}
{"time":1469502474,"action":"Open"}
{"time":1469502475,"action":"Open"}
{"time":1469502480,"action":"Open"}
{"time":1469502481,"action":"Open"}
{"time":1469502490,"action":"Open"}
{"time":1469502497,"action":"Close"}
{"time":1469502497,"action":"Open"}
{"time":1469502497,"action":"Close"}
{"time":1469502500,"action":"Close"}
{"time":1469502500,"action":"Open"}
{"time":1469502501,"action":"Open"}
{"time":1469502507,"action":"Close"}
{"time":1469502507,"action":"Open"}
{"time":1469502508,"action":"Open"}
{"time":1469502512,"action":"Open"}
{"time":1469502514,"action":"Open"}
{"time":1469502515,"action":"Open"}
{"time":1469502517,"action":"Close"}
{"time":1469502527,"action":"Open"}
{"time":1469502527,"action":"Open"}
{"time":1469502529,"action":"Open"}
{"time":1469502538,"action":"Open"}
{"time":1469502549,"action":"Open"}
{"time":1469502553,"action":"Open"}
{"time":1469502555,"action":"Open"}
{"time":1469502560,"action":"Open"}
{"time":1469502561,"action":"Open"}
{"time":1469502561,"action":"Open"}
{"time":1469502562,"action":"Open"}
{"time":1469502564,"action":"Close"}
{"time":1469502573,"action":"Open"}
{"time":1469502575,"action":"Open"}
{"time":1469502583,"action":"Open"}
{"time":1469502585,"action":"Open"}
{"time":1469502587,"action":"Open"}
{"time":1469502590,"action":"Open"}
{"time":1469502593,"action":"Open"}
{"time":1469502595,"action":"Close"}
{"time":1469502596,"action":"Open"}
{"time":1469502609,"action":"Open"}
{"time":1469502609,"action":"Open"}
{"time":1469502611,"action":"Open"}
{"time":1469502612,"action":"Open"}
{"time":1469502613,"action":"Open"}
{"time":1469502614,"action":"Open"}
{"time":1469502619,"action":"Open"}
{"time":1469502626,"action":"Close"}
{"time":1469502626,"action":"Open"}
{"time":1469502627,"action":"Open"}
{"time":1469502629,"action":"Open"}
{"time":1469502635,"action":"Open"}
{"time":1469502641,"action":"Open"}
{"time":1469502641,"action":"Open"}
{"time":1469502643,"action":"Close"}
{"time":1469502647,"action":"Open"}
{"time":1469502649,"action":"Open"}
{"time":1469502654,"action":"Open"}
{"time":1469502655,"action":"Open"}
{"time":1469502656,"action":"Open"}
{"time":1469502660,"action":"Close"}
{"time":1469502661,"action":"Close"}
{"time":1469502663,"action":"Open"}
{"time":1469502668,"action":"Open"}
{"time":1469502675,"action":"Open"}
{"time":1469502678,"action":"Open"}
{"time":1469502683,"action":"Open"}
{"time":1469502686,"action":"Open"}
{"time":1469502687,"action":"Open"}
{"time":1469502688,"action":"Open"}
{"time":1469502693,"action":"Open"}
{"time":1469502695,"action":"Open"}
{"time":1469502704,"action":"Open"}
{"time":1469502708,"action":"Close"}
{"time":1469502716,"action":"Open"}
{"time":1469502717,"action":"Open"}
{"time":1469502726,"action":"Open"}
{"time":1469502727,"action":"Open"}
{"time":1469502729,"action":"Open"}
{"time":1469502732,"action":"Open"}
{"time":1469502733,"action":"Open"}
{"time":1469502735,"action":"Open"}
{"time":1469502736,"action":"Open"}
{"time":1469502742,"action":"Open"}
{"time":1469502745,"action":"Open"}
{"time":1469502746,"action":"Open"}
{"time":1469502752,"action":"Open"}
{"time":1469502753,"action":"Open"}
{"time":1469502754,"action":"Open"}
{"time":1469502757,"action":"Open"}
{"time":1469502757,"action":"Open"}
{"time":1469502771,"action":"Open"}
{"time":1469502778,"action":"Open"}
{"time":1469502782,"action":"Open"}
{"time":1469502783,"action":"Close"}
{"time":1469502783,"action":"Open"}
{"time":1469502789,"action":"Open"}
{"time":1469502800,"action":"Open"}
{"time":1469502800,"action":"Open"}
{"time":1469502801,"action":"Open"}
{"time":1469502809,"action":"Close"}
{"time":1469502811,"action":"Open"}
{"time":1469502813,"action":"Close"}
{"time":1469502814,"action":"Open"}
{"time":1469502817,"action":"Open"}
{"time":1469502820,"action":"Open"}
{"time":1469502822,"action":"Close"}
{"time":1469502822,"action":"Open"}
{"time":1469502831,"action":"Close"}
{"time":1469502831,"action":"Open"}
{"time":1469502832,"action":"Close"}
{"time":1469502833,"action":"Open"}
{"time":1469502839,"action":"Open"}
{"time":1469502842,"action":"Close"}
{"time":1469502844,"action":"Open"}
{"time":1469502849,"action":"Open"}
{"time":1469502850,"action":"Open"}
{"time":1469502851,"action":"Open"}
{"time":1469502851,"action":"Open"}
{"time":1469502852,"action":"Open"}
{"time":1469502853,"action":"Open"}
{"time":1469502855,"action":"Open"}
{"time":1469502856,"action":"Open"}
{"time":1469502857,"action":"Open"}
{"time":1469502857,"action":"Open"}
{"time":1469502858,"action":"Open"}
{"time":1469502861,"action":"Open"}
{"time":1469502861,"action":"Open"}
{"time":1469502863,"action":"Close"}
{"time":1469502865,"action":"Open"}
{"time":1469502867,"action":"Open"}
{"time":1469502867,"action":"Open"}
{"time":1469502868,"action":"Open"}
{"time":1469502873,"action":"Open"}
{"time":1469502880,"action":"Close"}
{"time":1469502881,"action":"Close"}
{"time":1469502886,"action":"Open"}
{"time":1469502887,"action":"Open"}
{"time":1469502887,"action":"Open"}
{"time":1469502893,"action":"Close"}
{"time":1469502897,"action":"Open"}
{"time":1469502907,"action":"Open"}
{"time":1469502907,"action":"Open"}
{"time":1469502911,"action":"Close"}
{"time":1469502912,"action":"Open"}
{"time":1469502913,"action":"Open"}
{"time":1469502919,"action":"Open"}
{"time":1469502920,"action":"Open"}
{"time":1469502922,"action":"Open"}
{"time":1469502922,"action":"Open"}
{"time":1469502925,"action":"Open"}
{"time":1469502927,"action":"Open"}
{"time":1469502931,"action":"Open"}
{"time":1469502932,"action":"Open"}
{"time":1469502941,"action":"Open"}
{"time":1469502941,"action":"Open"}
{"time":1469502942,"action":"Open"}
{"time":1469502945,"action":"Close"}
{"time":1469502946,"action":"Close"}
{"time":1469502947,"action":"Open"}
{"time":1469502954,"action":"Close"}
{"time":1469502959,"action":"Open"}
{"time":1469502964,"action":"Close"}
{"time":1469502964,"action":"Open"}
{"time":1469502969,"action":"Close"}
{"time":1469502972,"action":"Close"}
{"time":1469502973,"action":"Close"}
{"time":1469502973,"action":"Open"}
{"time":1469502974,"action":"Open"}
{"time":1469502975,"action":"Close"}
{"time":1469502984,"action":"Open"}
{"time":1469502985,"action":"Open"}
{"time":1469502986,"action":"Close"}
{"time":1469502988,"action":"Open"}
{"time":1469502988,"action":"Open"}
{"time":1469502992,"action":"Open"}
{"time":1469502997,"action":"Open"}
{"time":1469503000,"action":"Open"}
{"time":1469503005,"action":"Open"}
{"time":1469503007,"action":"Open"}
{"time":1469503014,"action":"Open"}
{"time":1469503014,"action":"Open"}
{"time":1469503021,"action":"Open"}
{"time":1469503024,"action":"Open"}
{"time":1469503025,"action":"Open"}
{"time":1469503025,"action":"Open"}
{"time":1469503030,"action":"Open"}
{"time":1469503036,"action":"Open"}
{"time":1469503039,"action":"Open"}
{"time":1469503039,"action":"Open"}
{"time":1469503042,"action":"Open"}
{"time":1469503043,"action":"Open"}
{"time":1469503048,"action":"Open"}
{"time":1469503060,"action":"Open"}
{"time":1469503065,"action":"Close"}
{"time":1469503065,"action":"Open"}
{"time":1469503066,"action":"Open"}
{"time":1469503067,"action":"Open"}
{"time":1469503071,"action":"Open"}
{"time":1469503074,"action":"Open"}
{"time":1469503075,"action":"Open"}
{"time":1469503075,"action":"Open"}
{"time":1469503082,"action":"Close"}
{"time":1469503082,"action":"Open"}
{"time":1469503086,"action":"Open"}
{"time":1469503088,"action":"Close"}
{"time":1469503088,"action":"Open"}
{"time":1469503088,"action":"Open"}
{"time":1469503097,"action":"Open"}
{"time":1469503105,"action":"Open"}
{"time":1469503106,"action":"Close"}
{"time":1469503109,"action":"Open"}
{"time":1469503109,"action":"Open"}
{"time":1469503110,"action":"Close"}
{"time":1469503116,"action":"Close"}
{"time":1469503120,"action":"Open"}
{"time":1469503125,"action":"Open"}
{"time":1469503125,"action":"Open"}
{"time":1469503126,"action":"Close"}
{"time":1469503128,"action":"Open"}
{"time":1469503128,"action":"Open"}
{"time":1469503130,"action":"Open"}
{"time":1469503133,"action":"Open"}
{"time":1469503135,"action":"Open"}
{"time":1469503136,"action":"Close"}
{"time":1469503136,"action":"Open"}
{"time":1469503139,"action":"Open"}
{"time":1469503140,"action":"Close"}
{"time":1469503140,"action":"Close"}
{"time":1469503140,"action":"Open"}
{"time":1469503143,"action":"Open"}
{"time":1469503150,"action":"Open"}
{"time":1469503151,"action":"Close"}
{"time":1469503154,"action":"Close"}
{"time":1469503158,"action":"Open"}
{"time":1469503159,"action":"Open"}
{"time":1469503160,"action":"Close"}
{"time":1469503160,"action":"Close"}
{"time":1469503161,"action":"Open"}
{"time":1469503162,"action":"Open"}
{"time":1469503166,"action":"Open"}
{"time":1469503169,"action":"Open"}
{"time":1469503173,"action":"Open"}
{"time":1469503176,"action":"Open"}
{"time":1469503184,"action":"Open"}
{"time":1469503190,"action":"Close"}
{"time":1469503190,"action":"Open"}
{"time":1469503195,"action":"Close"}
{"time":1469503195,"action":"Open"}
{"time":1469503196,"action":"Open"}
{"time":1469503198,"action":"Open"}
{"time":1469503203,"action":"Open"}
{"time":1469503206,"action":"Open"}
{"time":1469503209,"action":"Open"}
{"time":1469503211,"action":"Open"}
{"time":1469503215,"action":"Open"}
{"time":1469503224,"action":"Close"}
{"time":1469503229,"action":"Open"}
{"time":1469503231,"action":"Close"}
{"time":1469503231,"action":"Open"}
{"time":1469503231,"action":"Open"}
{"time":1469503231,"action":"Open"}
{"time":1469503234,"action":"Open"}
{"time":1469503236,"action":"Open"}
{"time":1469503246,"action":"Close"}
{"time":1469503246,"action":"Open"}
{"time":1469503248,"action":"Open"}
{"time":1469503250,"action":"Close"}
{"time":1469503255,"action":"Open"}
{"time":1469503255,"action":"Open"}
{"time":1469503259,"action":"Open"}
{"time":1469503261,"action":"Open"}
{"time":1469503262,"action":"Open"}
{"time":1469503270,"action":"Open"}
{"time":1469503277,"action":"Open"}
{"time":1469503280,"action":"Close"}
{"time":1469503281,"action":"Open"}
{"time":1469503283,"action":"Open"}
{"time":1469503287,"action":"Open"}
{"time":1469503291,"action":"Close"}
{"time":1469503291,"action":"Open"}
{"time":1469503291,"action":"Open"}
{"time":1469503292,"action":"Open"}
{"time":1469503299,"action":"Open"}
{"time":1469503301,"action":"Open"}
{"time":1469503302,"action":"Close"}
{"time":1469503305,"action":"Open"}
{"time":1469503309,"action":"Open"}
{"time":1469503316,"action":"Open"}
{"time":1469503319,"action":"Open"}
{"time":1469503319,"action":"Open"}
{"time":1469503321,"action":"Open"}
{"time":1469503325,"action":"Close"}
{"time":1469503325,"action":"Open"}
{"time":1469503328,"action":"Open"}
{"time":1469503330,"action":"Open"}
{"time":1469503334,"action":"Open"}
{"time":1469503335,"action":"Close"}
{"time":1469503335,"action":"Open"}
{"time":1469503337,"action":"Open"}
{"time":1469503344,"action":"Close"}
{"time":1469503347,"action":"Open"}
{"time":1469503348,"action":"Open"}
{"time":1469503355,"action":"Open"}
{"time":1469503356,"action":"Close"}
{"time":1469503357,"action":"Close"}
{"time":1469503359,"action":"Open"}
{"time":1469503362,"action":"Close"}
{"time":1469503362,"action":"Open"}
{"time":1469503363,"action":"Close"}
{"time":1469503365,"action":"Open"}
{"time":1469503374,"action":"Open"}
{"time":1469503377,"action":"Open"}
{"time":1469503378,"action":"Open"}
{"time":1469503378,"action":"Open"}
{"time":1469503382,"action":"Open"}
{"time":1469503383,"action":"Open"}
{"time":1469503385,"action":"Close"}
{"time":1469503386,"action":"Open"}
{"time":1469503387,"action":"Open"}
{"time":1469503392,"action":"Open"}
{"time":1469503393,"action":"Open"}
{"time":1469503398,"action":"Open"}
{"time":1469503403,"action":"Close"}
{"time":1469503406,"action":"Close"}
{"time":1469503406,"action":"Open"}
{"time":1469503407,"action":"Open"}
{"time":1469503407,"action":"Open"}
{"time":1469503408,"action":"Open"}
{"time":1469503409,"action":"Open"}
{"time":1469503411,"action":"Open"}
{"time":1469503411,"action":"Open"}
{"time":1469503415,"action":"Open"}
{"time":1469503418,"action":"Close"}
{"time":1469503418,"action":"Open"}
{"time":1469503425,"action":"Close"}
{"time":1469503426,"action":"Close"}
{"time":1469503429,"action":"Open"}
{"time":1469503430,"action":"Open"}
{"time":1469503432,"action":"Open"}
{"time":1469503437,"action":"Close"}
{"time":1469503438,"action":"Open"}
{"time":1469503445,"action":"Open"}
{"time":1469503448,"action":"Open"}
{"time":1469503449,"action":"Close"}
{"time":1469503450,"action":"Open"}
{"time":1469503455,"action":"Open"}
{"time":1469503460,"action":"Open"}
{"time":1469503463,"action":"Open"}
{"time":1469503463,"action":"Open"}
{"time":1469503466,"action":"Open"}
{"time":1469503471,"action":"Close"}
{"time":1469503474,"action":"Open"}
{"time":1469503475,"action":"Open"}
{"time":1469503477,"action":"Open"}
{"time":1469503478,"action":"Open"}
{"time":1469503482,"action":"Open"}
{"time":1469503487,"action":"Close"}
{"time":1469503490,"action":"Open"}

*** WARNING: skipped 15627 bytes of output ***

{"time":1469504646,"action":"Open"}
{"time":1469504648,"action":"Open"}
{"time":1469504653,"action":"Open"}
{"time":1469504658,"action":"Open"}
{"time":1469504658,"action":"Open"}
{"time":1469504658,"action":"Open"}
{"time":1469504661,"action":"Close"}
{"time":1469504662,"action":"Open"}
{"time":1469504662,"action":"Open"}
{"time":1469504665,"action":"Open"}
{"time":1469504668,"action":"Close"}
{"time":1469504672,"action":"Open"}
{"time":1469504675,"action":"Open"}
{"time":1469504679,"action":"Open"}
{"time":1469504686,"action":"Open"}
{"time":1469504687,"action":"Open"}
{"time":1469504696,"action":"Close"}
{"time":1469504703,"action":"Open"}
{"time":1469504710,"action":"Open"}
{"time":1469504710,"action":"Open"}
{"time":1469504710,"action":"Open"}
{"time":1469504710,"action":"Open"}
{"time":1469504717,"action":"Open"}
{"time":1469504724,"action":"Close"}
{"time":1469504731,"action":"Open"}
{"time":1469504736,"action":"Open"}
{"time":1469504739,"action":"Open"}
{"time":1469504741,"action":"Close"}
{"time":1469504742,"action":"Close"}
{"time":1469504742,"action":"Close"}
{"time":1469504743,"action":"Close"}
{"time":1469504744,"action":"Open"}
{"time":1469504745,"action":"Open"}
{"time":1469504747,"action":"Close"}
{"time":1469504748,"action":"Open"}
{"time":1469504748,"action":"Open"}
{"time":1469504748,"action":"Close"}
{"time":1469504751,"action":"Open"}
{"time":1469504752,"action":"Open"}
{"time":1469504753,"action":"Close"}
{"time":1469504754,"action":"Open"}
{"time":1469504757,"action":"Open"}
{"time":1469504757,"action":"Open"}
{"time":1469504761,"action":"Close"}
{"time":1469504762,"action":"Open"}
{"time":1469504765,"action":"Close"}
{"time":1469504765,"action":"Open"}
{"time":1469504768,"action":"Close"}
{"time":1469504779,"action":"Open"}
{"time":1469504779,"action":"Open"}
{"time":1469504780,"action":"Close"}
{"time":1469504781,"action":"Open"}
{"time":1469504782,"action":"Close"}
{"time":1469504784,"action":"Close"}
{"time":1469504786,"action":"Close"}
{"time":1469504789,"action":"Open"}
{"time":1469504789,"action":"Open"}
{"time":1469504791,"action":"Open"}
{"time":1469504792,"action":"Open"}
{"time":1469504792,"action":"Open"}
{"time":1469504793,"action":"Open"}
{"time":1469504797,"action":"Close"}
{"time":1469504802,"action":"Open"}
{"time":1469504803,"action":"Close"}
{"time":1469504803,"action":"Open"}
{"time":1469504805,"action":"Open"}
{"time":1469504807,"action":"Close"}
{"time":1469504808,"action":"Close"}
{"time":1469504809,"action":"Open"}
{"time":1469504810,"action":"Open"}
{"time":1469504811,"action":"Open"}
{"time":1469504811,"action":"Open"}
{"time":1469504815,"action":"Open"}
{"time":1469504818,"action":"Open"}
{"time":1469504819,"action":"Open"}
{"time":1469504820,"action":"Close"}
{"time":1469504820,"action":"Open"}
{"time":1469504824,"action":"Close"}
{"time":1469504825,"action":"Open"}
{"time":1469504829,"action":"Open"}
{"time":1469504834,"action":"Close"}
{"time":1469504836,"action":"Open"}
{"time":1469504840,"action":"Open"}
{"time":1469504848,"action":"Open"}
{"time":1469504853,"action":"Close"}
{"time":1469504854,"action":"Close"}
{"time":1469504855,"action":"Open"}
{"time":1469504859,"action":"Open"}
{"time":1469504860,"action":"Close"}
{"time":1469504866,"action":"Close"}
{"time":1469504873,"action":"Close"}
{"time":1469504875,"action":"Open"}
{"time":1469504881,"action":"Open"}
{"time":1469504882,"action":"Close"}
{"time":1469504886,"action":"Open"}
{"time":1469504889,"action":"Open"}
{"time":1469504890,"action":"Close"}
{"time":1469504892,"action":"Open"}
{"time":1469504897,"action":"Close"}
{"time":1469504901,"action":"Close"}
{"time":1469504902,"action":"Open"}
{"time":1469504903,"action":"Close"}
{"time":1469504903,"action":"Open"}
{"time":1469504904,"action":"Open"}
{"time":1469504905,"action":"Open"}
{"time":1469504909,"action":"Close"}
{"time":1469504909,"action":"Open"}
{"time":1469504910,"action":"Open"}
{"time":1469504911,"action":"Close"}
{"time":1469504915,"action":"Open"}
{"time":1469504916,"action":"Open"}
{"time":1469504922,"action":"Close"}
{"time":1469504926,"action":"Close"}
{"time":1469504926,"action":"Open"}
{"time":1469504929,"action":"Open"}
{"time":1469504929,"action":"Open"}
{"time":1469504931,"action":"Open"}
{"time":1469504933,"action":"Close"}
{"time":1469504935,"action":"Open"}
{"time":1469504937,"action":"Close"}
{"time":1469504937,"action":"Open"}
{"time":1469504942,"action":"Open"}
{"time":1469504943,"action":"Open"}
{"time":1469504944,"action":"Open"}
{"time":1469504946,"action":"Close"}
{"time":1469504948,"action":"Open"}
{"time":1469504958,"action":"Open"}
{"time":1469504960,"action":"Close"}
{"time":1469504960,"action":"Open"}
{"time":1469504963,"action":"Open"}
{"time":1469504964,"action":"Close"}
{"time":1469504964,"action":"Open"}
{"time":1469504967,"action":"Open"}
{"time":1469504968,"action":"Close"}
{"time":1469504971,"action":"Close"}
{"time":1469504972,"action":"Close"}
{"time":1469504974,"action":"Close"}
{"time":1469504983,"action":"Close"}
{"time":1469504983,"action":"Close"}
{"time":1469504983,"action":"Open"}
{"time":1469504984,"action":"Open"}
{"time":1469504987,"action":"Close"}
{"time":1469504989,"action":"Open"}
{"time":1469504991,"action":"Open"}
{"time":1469504993,"action":"Open"}
{"time":1469504994,"action":"Open"}
{"time":1469504998,"action":"Open"}
{"time":1469505000,"action":"Open"}
{"time":1469505005,"action":"Open"}
{"time":1469505005,"action":"Open"}
{"time":1469505007,"action":"Open"}
{"time":1469505008,"action":"Open"}
{"time":1469505010,"action":"Open"}
{"time":1469505012,"action":"Open"}
{"time":1469505013,"action":"Close"}
{"time":1469505013,"action":"Open"}
{"time":1469505013,"action":"Open"}
{"time":1469505017,"action":"Open"}
{"time":1469505020,"action":"Close"}
{"time":1469505020,"action":"Open"}
{"time":1469505021,"action":"Close"}
{"time":1469505022,"action":"Close"}
{"time":1469505022,"action":"Close"}
{"time":1469505023,"action":"Close"}
{"time":1469505029,"action":"Open"}
{"time":1469505032,"action":"Open"}
{"time":1469505033,"action":"Open"}
{"time":1469505035,"action":"Close"}
{"time":1469505039,"action":"Close"}
{"time":1469505040,"action":"Close"}
{"time":1469505040,"action":"Open"}
{"time":1469505041,"action":"Open"}
{"time":1469505046,"action":"Close"}
{"time":1469505046,"action":"Open"}
{"time":1469505047,"action":"Open"}
{"time":1469505049,"action":"Close"}
{"time":1469505050,"action":"Open"}
{"time":1469505052,"action":"Open"}
{"time":1469505057,"action":"Close"}
{"time":1469505059,"action":"Open"}
{"time":1469505062,"action":"Close"}
{"time":1469505064,"action":"Close"}
{"time":1469505069,"action":"Open"}
{"time":1469505072,"action":"Close"}
{"time":1469505072,"action":"Close"}
{"time":1469505072,"action":"Close"}
{"time":1469505074,"action":"Open"}
{"time":1469505075,"action":"Open"}
{"time":1469505076,"action":"Open"}
{"time":1469505077,"action":"Open"}
{"time":1469505081,"action":"Open"}
{"time":1469505082,"action":"Open"}
{"time":1469505085,"action":"Open"}
{"time":1469505086,"action":"Open"}
{"time":1469505086,"action":"Open"}
{"time":1469505086,"action":"Open"}
{"time":1469505098,"action":"Open"}
{"time":1469505101,"action":"Open"}
{"time":1469505102,"action":"Open"}
{"time":1469505102,"action":"Open"}
{"time":1469505106,"action":"Close"}
{"time":1469505106,"action":"Close"}
{"time":1469505111,"action":"Open"}
{"time":1469505118,"action":"Open"}
{"time":1469505120,"action":"Close"}
{"time":1469505126,"action":"Open"}
{"time":1469505128,"action":"Close"}
{"time":1469505129,"action":"Close"}
{"time":1469505129,"action":"Open"}
{"time":1469505130,"action":"Open"}
{"time":1469505130,"action":"Open"}
{"time":1469505133,"action":"Open"}
{"time":1469505139,"action":"Close"}
{"time":1469505140,"action":"Open"}
{"time":1469505155,"action":"Open"}
{"time":1469505162,"action":"Open"}
{"time":1469505163,"action":"Close"}
{"time":1469505164,"action":"Open"}
{"time":1469505166,"action":"Open"}
{"time":1469505169,"action":"Open"}
{"time":1469505170,"action":"Open"}
{"time":1469505170,"action":"Open"}
{"time":1469505172,"action":"Open"}
{"time":1469505175,"action":"Open"}
{"time":1469505176,"action":"Open"}
{"time":1469505180,"action":"Close"}
{"time":1469505180,"action":"Close"}
{"time":1469505180,"action":"Open"}
{"time":1469505183,"action":"Close"}
{"time":1469505184,"action":"Open"}
{"time":1469505184,"action":"Open"}
{"time":1469505185,"action":"Close"}
{"time":1469505185,"action":"Close"}
{"time":1469505188,"action":"Close"}
{"time":1469505191,"action":"Open"}
{"time":1469505192,"action":"Open"}
{"time":1469505194,"action":"Close"}
{"time":1469505200,"action":"Open"}
{"time":1469505201,"action":"Close"}
{"time":1469505203,"action":"Close"}
{"time":1469505204,"action":"Close"}
{"time":1469505204,"action":"Open"}
{"time":1469505207,"action":"Close"}
{"time":1469505209,"action":"Open"}
{"time":1469505211,"action":"Open"}
{"time":1469505219,"action":"Open"}
{"time":1469505222,"action":"Close"}
{"time":1469505226,"action":"Close"}
{"time":1469505229,"action":"Close"}
{"time":1469505235,"action":"Open"}
{"time":1469505237,"action":"Close"}
{"time":1469505238,"action":"Open"}
{"time":1469505239,"action":"Open"}
{"time":1469505241,"action":"Open"}
{"time":1469505246,"action":"Open"}
{"time":1469505250,"action":"Open"}
{"time":1469505250,"action":"Open"}
{"time":1469505255,"action":"Open"}
{"time":1469505255,"action":"Open"}
{"time":1469505256,"action":"Open"}
{"time":1469505259,"action":"Close"}
{"time":1469505261,"action":"Open"}
{"time":1469505261,"action":"Open"}
{"time":1469505262,"action":"Close"}
{"time":1469505263,"action":"Close"}
{"time":1469505264,"action":"Open"}
{"time":1469505265,"action":"Open"}
{"time":1469505266,"action":"Open"}
{"time":1469505266,"action":"Open"}
{"time":1469505269,"action":"Open"}
{"time":1469505269,"action":"Open"}
{"time":1469505272,"action":"Open"}
{"time":1469505273,"action":"Close"}
{"time":1469505278,"action":"Close"}
{"time":1469505278,"action":"Open"}
{"time":1469505281,"action":"Open"}
{"time":1469505283,"action":"Close"}
{"time":1469505283,"action":"Close"}
{"time":1469505286,"action":"Open"}
{"time":1469505289,"action":"Open"}
{"time":1469505291,"action":"Close"}
{"time":1469505294,"action":"Close"}
{"time":1469505295,"action":"Close"}
{"time":1469505296,"action":"Close"}
{"time":1469505300,"action":"Open"}
{"time":1469505300,"action":"Open"}
{"time":1469505301,"action":"Open"}
{"time":1469505301,"action":"Open"}
{"time":1469505303,"action":"Open"}
{"time":1469505307,"action":"Close"}
{"time":1469505307,"action":"Open"}
{"time":1469505312,"action":"Close"}
{"time":1469505320,"action":"Close"}
{"time":1469505321,"action":"Open"}
{"time":1469505328,"action":"Close"}
{"time":1469505330,"action":"Open"}
{"time":1469505332,"action":"Close"}
{"time":1469505333,"action":"Open"}
{"time":1469505335,"action":"Open"}
{"time":1469505336,"action":"Close"}
{"time":1469505336,"action":"Open"}
{"time":1469505343,"action":"Close"}
{"time":1469505344,"action":"Open"}
{"time":1469505346,"action":"Open"}
{"time":1469505349,"action":"Open"}
{"time":1469505349,"action":"Open"}
{"time":1469505351,"action":"Close"}
{"time":1469505353,"action":"Close"}
{"time":1469505353,"action":"Open"}
{"time":1469505361,"action":"Open"}
{"time":1469505363,"action":"Open"}
{"time":1469505363,"action":"Open"}
{"time":1469505370,"action":"Open"}
{"time":1469505371,"action":"Close"}
{"time":1469505372,"action":"Open"}
{"time":1469505372,"action":"Close"}
{"time":1469505375,"action":"Close"}
{"time":1469505377,"action":"Close"}
{"time":1469505378,"action":"Open"}
{"time":1469505380,"action":"Close"}
{"time":1469505384,"action":"Open"}
{"time":1469505387,"action":"Close"}
{"time":1469505389,"action":"Close"}
{"time":1469505393,"action":"Close"}
{"time":1469505393,"action":"Close"}
{"time":1469505397,"action":"Open"}
{"time":1469505406,"action":"Open"}
{"time":1469505413,"action":"Close"}
{"time":1469505414,"action":"Close"}
{"time":1469505414,"action":"Open"}
{"time":1469505414,"action":"Open"}
{"time":1469505415,"action":"Open"}
{"time":1469505416,"action":"Open"}
{"time":1469505418,"action":"Open"}
{"time":1469505421,"action":"Open"}
{"time":1469505424,"action":"Open"}
{"time":1469505428,"action":"Open"}
{"time":1469505430,"action":"Open"}
{"time":1469505443,"action":"Open"}
{"time":1469505451,"action":"Close"}
{"time":1469505460,"action":"Open"}
{"time":1469505460,"action":"Open"}
{"time":1469505462,"action":"Close"}
{"time":1469505463,"action":"Close"}
{"time":1469505464,"action":"Open"}
{"time":1469505465,"action":"Close"}
{"time":1469505465,"action":"Close"}
{"time":1469505473,"action":"Open"}
{"time":1469505474,"action":"Open"}
{"time":1469505478,"action":"Open"}
{"time":1469505480,"action":"Close"}
{"time":1469505482,"action":"Open"}
{"time":1469505484,"action":"Close"}
{"time":1469505487,"action":"Open"}
{"time":1469505488,"action":"Open"}
{"time":1469505490,"action":"Open"}
{"time":1469505498,"action":"Open"}
{"time":1469505499,"action":"Open"}
{"time":1469505504,"action":"Open"}
{"time":1469505505,"action":"Open"}
{"time":1469505509,"action":"Open"}
{"time":1469505514,"action":"Close"}
{"time":1469505515,"action":"Open"}
{"time":1469505517,"action":"Open"}
{"time":1469505523,"action":"Close"}
{"time":1469505524,"action":"Open"}
{"time":1469505524,"action":"Open"}
{"time":1469505525,"action":"Open"}
{"time":1469505526,"action":"Close"}
{"time":1469505526,"action":"Open"}
{"time":1469505527,"action":"Open"}
{"time":1469505528,"action":"Open"}
{"time":1469505531,"action":"Close"}
{"time":1469505533,"action":"Open"}
{"time":1469505534,"action":"Close"}
{"time":1469505534,"action":"Open"}
{"time":1469505535,"action":"Open"}
{"time":1469505538,"action":"Open"}
{"time":1469505538,"action":"Open"}
{"time":1469505539,"action":"Close"}
{"time":1469505539,"action":"Open"}
{"time":1469505540,"action":"Close"}
{"time":1469505542,"action":"Open"}
{"time":1469505543,"action":"Open"}
{"time":1469505544,"action":"Close"}
{"time":1469505545,"action":"Open"}
{"time":1469505550,"action":"Close"}
{"time":1469505550,"action":"Open"}
{"time":1469505551,"action":"Close"}
{"time":1469505553,"action":"Open"}
{"time":1469505555,"action":"Open"}
{"time":1469505556,"action":"Open"}
{"time":1469505557,"action":"Open"}
{"time":1469505558,"action":"Close"}
{"time":1469505558,"action":"Open"}
{"time":1469505561,"action":"Close"}
{"time":1469505563,"action":"Close"}
{"time":1469505563,"action":"Open"}
{"time":1469505564,"action":"Close"}
{"time":1469505566,"action":"Close"}
{"time":1469505567,"action":"Open"}
{"time":1469505573,"action":"Open"}
{"time":1469505574,"action":"Open"}
{"time":1469505579,"action":"Close"}
{"time":1469505582,"action":"Open"}
{"time":1469505586,"action":"Open"}
{"time":1469505588,"action":"Open"}
{"time":1469505589,"action":"Open"}
{"time":1469505590,"action":"Close"}
{"time":1469505591,"action":"Close"}
{"time":1469505591,"action":"Open"}
{"time":1469505597,"action":"Close"}
{"time":1469505597,"action":"Close"}
{"time":1469505599,"action":"Open"}
{"time":1469505601,"action":"Open"}
{"time":1469505602,"action":"Close"}
{"time":1469505612,"action":"Close"}
{"time":1469505616,"action":"Close"}
{"time":1469505616,"action":"Open"}
{"time":1469505617,"action":"Open"}
{"time":1469505619,"action":"Close"}
{"time":1469505621,"action":"Open"}
{"time":1469505624,"action":"Open"}
{"time":1469505625,"action":"Open"}
{"time":1469505626,"action":"Close"}
{"time":1469505628,"action":"Close"}
{"time":1469505629,"action":"Open"}
{"time":1469505638,"action":"Close"}
{"time":1469505640,"action":"Open"}
{"time":1469505640,"action":"Open"}
{"time":1469505650,"action":"Open"}
{"time":1469505653,"action":"Open"}
{"time":1469505661,"action":"Close"}
{"time":1469505661,"action":"Open"}
{"time":1469505663,"action":"Open"}
{"time":1469505665,"action":"Open"}
{"time":1469505668,"action":"Open"}
{"time":1469505682,"action":"Open"}
{"time":1469505686,"action":"Open"}
{"time":1469505694,"action":"Close"}
{"time":1469505695,"action":"Open"}
{"time":1469505696,"action":"Open"}
{"time":1469505700,"action":"Open"}
{"time":1469505708,"action":"Open"}
{"time":1469505711,"action":"Close"}
{"time":1469505713,"action":"Close"}
{"time":1469505715,"action":"Close"}
{"time":1469505715,"action":"Open"}
{"time":1469505718,"action":"Close"}
{"time":1469505719,"action":"Open"}
{"time":1469505723,"action":"Open"}
{"time":1469505725,"action":"Open"}
{"time":1469505728,"action":"Close"}
{"time":1469505731,"action":"Open"}
{"time":1469505733,"action":"Close"}
{"time":1469505733,"action":"Open"}
{"time":1469505735,"action":"Open"}
{"time":1469505735,"action":"Open"}
{"time":1469505736,"action":"Close"}
{"time":1469505739,"action":"Close"}
{"time":1469505739,"action":"Close"}
{"time":1469505741,"action":"Open"}
{"time":1469505741,"action":"Open"}
{"time":1469505748,"action":"Close"}
{"time":1469505748,"action":"Open"}
{"time":1469505748,"action":"Open"}
{"time":1469505749,"action":"Close"}
{"time":1469505753,"action":"Open"}
{"time":1469505754,"action":"Open"}
{"time":1469505758,"action":"Open"}
{"time":1469505758,"action":"Open"}
{"time":1469505759,"action":"Close"}
{"time":1469505759,"action":"Open"}
{"time":1469505769,"action":"Open"}
{"time":1469505770,"action":"Close"}
{"time":1469505770,"action":"Open"}
{"time":1469505775,"action":"Open"}
{"time":1469505783,"action":"Open"}
{"time":1469505787,"action":"Close"}
{"time":1469505788,"action":"Open"}
{"time":1469505793,"action":"Open"}
{"time":1469505794,"action":"Open"}
{"time":1469505797,"action":"Close"}
{"time":1469505800,"action":"Close"}
{"time":1469505801,"action":"Close"}
{"time":1469505802,"action":"Close"}
{"time":1469505803,"action":"Open"}
{"time":1469505811,"action":"Close"}
{"time":1469505812,"action":"Open"}
{"time":1469505815,"action":"Close"}
{"time":1469505820,"action":"Close"}
{"time":1469505820,"action":"Open"}
{"time":1469505824,"action":"Close"}
{"time":1469505830,"action":"Open"}
{"time":1469505832,"action":"Close"}
{"time":1469505834,"action":"Open"}
{"time":1469505835,"action":"Close"}
{"time":1469505835,"action":"Open"}
{"time":1469505836,"action":"Open"}
{"time":1469505838,"action":"Open"}
{"time":1469505839,"action":"Close"}
{"time":1469505841,"action":"Open"}
{"time":1469505842,"action":"Close"}
{"time":1469505844,"action":"Close"}
{"time":1469505851,"action":"Open"}
{"time":1469505851,"action":"Open"}
{"time":1469505854,"action":"Open"}
{"time":1469505860,"action":"Open"}
{"time":1469505863,"action":"Open"}
{"time":1469505867,"action":"Open"}
{"time":1469505873,"action":"Open"}
{"time":1469505875,"action":"Close"}
{"time":1469505875,"action":"Open"}
{"time":1469505875,"action":"Open"}
{"time":1469505877,"action":"Close"}
{"time":1469505882,"action":"Close"}
{"time":1469505886,"action":"Open"}
{"time":1469505890,"action":"Close"}
{"time":1469505892,"action":"Open"}
{"time":1469505897,"action":"Open"}
{"time":1469505902,"action":"Close"}
{"time":1469505903,"action":"Open"}
{"time":1469505904,"action":"Open"}
{"time":1469505904,"action":"Open"}
{"time":1469505905,"action":"Close"}
{"time":1469505905,"action":"Open"}
{"time":1469505905,"action":"Open"}
{"time":1469505907,"action":"Close"}
{"time":1469505907,"action":"Open"}
{"time":1469505910,"action":"Open"}
{"time":1469505913,"action":"Open"}
{"time":1469505918,"action":"Close"}
{"time":1469505919,"action":"Open"}
{"time":1469505920,"action":"Open"}
{"time":1469505922,"action":"Open"}
{"time":1469505923,"action":"Close"}
{"time":1469505924,"action":"Open"}
{"time":1469505927,"action":"Open"}
{"time":1469505927,"action":"Open"}
{"time":1469505929,"action":"Open"}
{"time":1469505933,"action":"Open"}
{"time":1469505935,"action":"Open"}
{"time":1469505936,"action":"Close"}
{"time":1469505937,"action":"Close"}
{"time":1469505937,"action":"Open"}
{"time":1469505938,"action":"Open"}
{"time":1469505939,"action":"Close"}
{"time":1469505941,"action":"Open"}
{"time":1469505942,"action":"Close"}
{"time":1469505944,"action":"Open"}
{"time":1469505947,"action":"Close"}
{"time":1469505954,"action":"Close"}
{"time":1469505954,"action":"Open"}
{"time":1469505955,"action":"Close"}
{"time":1469505958,"action":"Open"}
{"time":1469505959,"action":"Close"}
{"time":1469505961,"action":"Close"}
{"time":1469505966,"action":"Open"}
{"time":1469505966,"action":"Open"}
{"time":1469505967,"action":"Close"}
{"time":1469505969,"action":"Open"}
{"time":1469505970,"action":"Close"}
{"time":1469505970,"action":"Open"}
{"time":1469505972,"action":"Close"}
{"time":1469505972,"action":"Close"}
{"time":1469505975,"action":"Close"}
{"time":1469505977,"action":"Close"}
{"time":1469505977,"action":"Open"}
{"time":1469505979,"action":"Open"}
{"time":1469505980,"action":"Open"}
{"time":1469505986,"action":"Open"}
{"time":1469505987,"action":"Open"}
{"time":1469505987,"action":"Open"}
{"time":1469505990,"action":"Open"}
{"time":1469505990,"action":"Open"}
{"time":1469505990,"action":"Open"}
{"time":1469505991,"action":"Open"}
{"time":1469505992,"action":"Open"}
{"time":1469505998,"action":"Open"}
{"time":1469506000,"action":"Open"}
{"time":1469506002,"action":"Close"}
{"time":1469506004,"action":"Open"}
{"time":1469506005,"action":"Close"}
{"time":1469506005,"action":"Close"}
{"time":1469506005,"action":"Open"}
{"time":1469506006,"action":"Close"}
{"time":1469506006,"action":"Close"}
{"time":1469506006,"action":"Open"}
{"time":1469506010,"action":"Open"}
{"time":1469506012,"action":"Open"}
{"time":1469506022,"action":"Close"}
{"time":1469506022,"action":"Open"}
{"time":1469506025,"action":"Open"}
{"time":1469506028,"action":"Open"}
{"time":1469506030,"action":"Open"}
{"time":1469506030,"action":"Open"}
{"time":1469506032,"action":"Open"}
{"time":1469506032,"action":"Open"}
{"time":1469506033,"action":"Close"}
{"time":1469506033,"action":"Open"}
{"time":1469506035,"action":"Close"}
{"time":1469506036,"action":"Close"}
{"time":1469506038,"action":"Open"}
{"time":1469506041,"action":"Open"}
{"time":1469506044,"action":"Close"}
{"time":1469506046,"action":"Open"}
{"time":1469506046,"action":"Open"}
{"time":1469506047,"action":"Close"}
{"time":1469506047,"action":"Open"}
{"time":1469506049,"action":"Open"}
{"time":1469506050,"action":"Close"}
{"time":1469506051,"action":"Close"}
{"time":1469506053,"action":"Open"}
{"time":1469506055,"action":"Close"}
{"time":1469506056,"action":"Open"}
{"time":1469506056,"action":"Open"}
{"time":1469506058,"action":"Open"}
{"time":1469506060,"action":"Open"}
{"time":1469506063,"action":"Open"}
{"time":1469506070,"action":"Close"}
{"time":1469506070,"action":"Open"}
{"time":1469506072,"action":"Open"}
{"time":1469506074,"action":"Open"}
{"time":1469506081,"action":"Close"}
{"time":1469506081,"action":"Open"}
{"time":1469506081,"action":"Open"}
{"time":1469506083,"action":"Open"}
{"time":1469506083,"action":"Open"}
{"time":1469506085,"action":"Close"}
{"time":1469506085,"action":"Open"}
{"time":1469506091,"action":"Close"}
{"time":1469506095,"action":"Open"}
{"time":1469506096,"action":"Close"}
{"time":1469506097,"action":"Close"}
{"time":1469506099,"action":"Close"}
{"time":1469506107,"action":"Close"}
{"time":1469506109,"action":"Close"}
{"time":1469506110,"action":"Close"}
{"time":1469506110,"action":"Open"}
{"time":1469506111,"action":"Open"}
{"time":1469506113,"action":"Open"}
{"time":1469506114,"action":"Open"}
{"time":1469506114,"action":"Open"}
{"time":1469506115,"action":"Open"}
{"time":1469506116,"action":"Close"}
{"time":1469506124,"action":"Open"}
{"time":1469506125,"action":"Close"}
{"time":1469506129,"action":"Open"}
{"time":1469506130,"action":"Open"}
{"time":1469506133,"action":"Close"}
{"time":1469506135,"action":"Open"}
{"time":1469506135,"action":"Open"}
{"time":1469506137,"action":"Close"}
{"time":1469506140,"action":"Open"}
{"time":1469506144,"action":"Open"}
{"time":1469506148,"action":"Open"}
{"time":1469506150,"action":"Open"}
{"time":1469506153,"action":"Open"}
{"time":1469506154,"action":"Open"}
{"time":1469506155,"action":"Close"}
{"time":1469506159,"action":"Open"}
{"time":1469506160,"action":"Open"}
{"time":1469506161,"action":"Open"}
{"time":1469506165,"action":"Open"}
{"time":1469506166,"action":"Close"}
{"time":1469506167,"action":"Open"}
{"time":1469506173,"action":"Open"}
{"time":1469506174,"action":"Open"}
{"time":1469506176,"action":"Close"}
{"time":1469506178,"action":"Close"}
{"time":1469506180,"action":"Close"}
{"time":1469506186,"action":"Close"}
{"time":1469506186,"action":"Close"}
{"time":1469506187,"action":"Open"}
{"time":1469506189,"action":"Close"}
{"time":1469506207,"action":"Close"}
{"time":1469506207,"action":"Open"}
{"time":1469506216,"action":"Open"}
{"time":1469506218,"action":"Open"}
{"time":1469506220,"action":"Close"}
{"time":1469506220,"action":"Open"}
{"time":1469506221,"action":"Open"}
{"time":1469506225,"action":"Open"}
{"time":1469506227,"action":"Close"}
{"time":1469506228,"action":"Open"}
{"time":1469506233,"action":"Open"}
{"time":1469506234,"action":"O

Each line in the files contains a JSON record with two fields: time and action. Let's try to analyze these files interactively.

Batch/Interactive Processing

The usual first step in processing the data is to query it interactively. Let's define a static DataFrame over the files and give it a table name.

import org.apache.spark.sql.types._

val inputPath = "/databricks-datasets/structured-streaming/events/"

// Since we know the data format already, let's define the schema to speed up processing (no need for Spark to infer schema)
val jsonSchema = new StructType().add("time", TimestampType).add("action", StringType)

val staticInputDF = 
  spark
    .read
    .schema(jsonSchema)
    .json(inputPath)

display(staticInputDF)
time action
2016-07-28T04:19:28.000+0000 Close
2016-07-28T04:19:28.000+0000 Close
2016-07-28T04:19:29.000+0000 Open
2016-07-28T04:19:31.000+0000 Close
2016-07-28T04:19:31.000+0000 Open
2016-07-28T04:19:31.000+0000 Open
2016-07-28T04:19:32.000+0000 Close
2016-07-28T04:19:33.000+0000 Close
2016-07-28T04:19:35.000+0000 Close
2016-07-28T04:19:36.000+0000 Open
2016-07-28T04:19:38.000+0000 Close
2016-07-28T04:19:40.000+0000 Open
2016-07-28T04:19:41.000+0000 Close
2016-07-28T04:19:42.000+0000 Open
2016-07-28T04:19:45.000+0000 Open
2016-07-28T04:19:47.000+0000 Open
2016-07-28T04:19:48.000+0000 Open
2016-07-28T04:19:49.000+0000 Open
2016-07-28T04:19:55.000+0000 Open
2016-07-28T04:20:00.000+0000 Close
2016-07-28T04:20:00.000+0000 Open
2016-07-28T04:20:01.000+0000 Open
2016-07-28T04:20:03.000+0000 Close
2016-07-28T04:20:07.000+0000 Open
2016-07-28T04:20:11.000+0000 Open
2016-07-28T04:20:12.000+0000 Close
2016-07-28T04:20:12.000+0000 Open
2016-07-28T04:20:13.000+0000 Close
2016-07-28T04:20:16.000+0000 Open
2016-07-28T04:20:23.000+0000 Close
2016-07-28T04:20:23.000+0000 Close
2016-07-28T04:20:23.000+0000 Open
2016-07-28T04:20:26.000+0000 Close
2016-07-28T04:20:30.000+0000 Close
2016-07-28T04:20:32.000+0000 Open
2016-07-28T04:20:32.000+0000 Open
2016-07-28T04:20:34.000+0000 Close
2016-07-28T04:20:36.000+0000 Open
2016-07-28T04:20:42.000+0000 Close
2016-07-28T04:20:42.000+0000 Open
2016-07-28T04:20:42.000+0000 Open
2016-07-28T04:20:48.000+0000 Close
2016-07-28T04:20:48.000+0000 Close
2016-07-28T04:20:48.000+0000 Open
2016-07-28T04:20:50.000+0000 Open
2016-07-28T04:20:52.000+0000 Open
2016-07-28T04:20:55.000+0000 Open
2016-07-28T04:20:55.000+0000 Open
2016-07-28T04:20:56.000+0000 Close
2016-07-28T04:20:56.000+0000 Close
2016-07-28T04:20:56.000+0000 Open
2016-07-28T04:20:59.000+0000 Open
2016-07-28T04:20:59.000+0000 Open
2016-07-28T04:21:02.000+0000 Close
2016-07-28T04:21:04.000+0000 Open
2016-07-28T04:21:08.000+0000 Open
2016-07-28T04:21:11.000+0000 Close
2016-07-28T04:21:13.000+0000 Open
2016-07-28T04:21:18.000+0000 Open
2016-07-28T04:21:19.000+0000 Open
2016-07-28T04:21:20.000+0000 Close
2016-07-28T04:21:22.000+0000 Open
2016-07-28T04:21:23.000+0000 Open
2016-07-28T04:21:27.000+0000 Close
2016-07-28T04:21:28.000+0000 Open
2016-07-28T04:21:31.000+0000 Close
2016-07-28T04:21:32.000+0000 Open
2016-07-28T04:21:32.000+0000 Open
2016-07-28T04:21:33.000+0000 Open
2016-07-28T04:21:34.000+0000 Close
2016-07-28T04:21:34.000+0000 Open
2016-07-28T04:21:35.000+0000 Close
2016-07-28T04:21:37.000+0000 Open
2016-07-28T04:21:38.000+0000 Close
2016-07-28T04:21:44.000+0000 Open
2016-07-28T04:21:46.000+0000 Close
2016-07-28T04:21:46.000+0000 Open
2016-07-28T04:21:48.000+0000 Close
2016-07-28T04:21:49.000+0000 Open
2016-07-28T04:21:50.000+0000 Close
2016-07-28T04:21:52.000+0000 Close
2016-07-28T04:21:52.000+0000 Open
2016-07-28T04:21:52.000+0000 Open
2016-07-28T04:21:53.000+0000 Open
2016-07-28T04:21:53.000+0000 Open
2016-07-28T04:21:56.000+0000 Close
2016-07-28T04:21:56.000+0000 Close
2016-07-28T04:21:57.000+0000 Close
2016-07-28T04:21:58.000+0000 Open
2016-07-28T04:21:59.000+0000 Close
2016-07-28T04:22:01.000+0000 Open
2016-07-28T04:22:06.000+0000 Close
2016-07-28T04:22:10.000+0000 Open
2016-07-28T04:22:11.000+0000 Close
2016-07-28T04:22:12.000+0000 Open
2016-07-28T04:22:14.000+0000 Close
2016-07-28T04:22:15.000+0000 Close
2016-07-28T04:22:15.000+0000 Open
2016-07-28T04:22:19.000+0000 Close
2016-07-28T04:22:20.000+0000 Close
2016-07-28T04:22:24.000+0000 Close
2016-07-28T04:22:24.000+0000 Close
2016-07-28T04:22:24.000+0000 Open
2016-07-28T04:22:30.000+0000 Open
2016-07-28T04:22:31.000+0000 Close
2016-07-28T04:22:31.000+0000 Open
2016-07-28T04:22:33.000+0000 Close
2016-07-28T04:22:39.000+0000 Open
2016-07-28T04:22:40.000+0000 Close
2016-07-28T04:22:43.000+0000 Open
2016-07-28T04:22:44.000+0000 Open
2016-07-28T04:22:49.000+0000 Close
2016-07-28T04:22:50.000+0000 Open
2016-07-28T04:22:55.000+0000 Open
2016-07-28T04:22:56.000+0000 Open
2016-07-28T04:22:58.000+0000 Open
2016-07-28T04:23:00.000+0000 Close
2016-07-28T04:23:00.000+0000 Open
2016-07-28T04:23:06.000+0000 Close
2016-07-28T04:23:07.000+0000 Close
2016-07-28T04:23:07.000+0000 Open
2016-07-28T04:23:08.000+0000 Open
2016-07-28T04:23:10.000+0000 Open
2016-07-28T04:23:11.000+0000 Close
2016-07-28T04:23:12.000+0000 Open
2016-07-28T04:23:13.000+0000 Open
2016-07-28T04:23:17.000+0000 Close
2016-07-28T04:23:23.000+0000 Open
2016-07-28T04:23:28.000+0000 Open
2016-07-28T04:23:29.000+0000 Close
2016-07-28T04:23:29.000+0000 Open
2016-07-28T04:23:31.000+0000 Open
2016-07-28T04:23:32.000+0000 Open
2016-07-28T04:23:40.000+0000 Close
2016-07-28T04:23:40.000+0000 Close
2016-07-28T04:23:41.000+0000 Open
2016-07-28T04:23:42.000+0000 Close
2016-07-28T04:23:43.000+0000 Close
2016-07-28T04:23:43.000+0000 Open
2016-07-28T04:23:44.000+0000 Open
2016-07-28T04:23:46.000+0000 Close
2016-07-28T04:23:48.000+0000 Open
2016-07-28T04:23:49.000+0000 Open
2016-07-28T04:23:52.000+0000 Close
2016-07-28T04:23:53.000+0000 Close
2016-07-28T04:23:54.000+0000 Close
2016-07-28T04:23:55.000+0000 Close
2016-07-28T04:24:05.000+0000 Close
2016-07-28T04:24:07.000+0000 Close
2016-07-28T04:24:09.000+0000 Close
2016-07-28T04:24:14.000+0000 Close
2016-07-28T04:24:14.000+0000 Close
2016-07-28T04:24:14.000+0000 Open
2016-07-28T04:24:18.000+0000 Close
2016-07-28T04:24:18.000+0000 Open
2016-07-28T04:24:20.000+0000 Close
2016-07-28T04:24:21.000+0000 Close
2016-07-28T04:24:21.000+0000 Open
2016-07-28T04:24:22.000+0000 Open
2016-07-28T04:24:23.000+0000 Open
2016-07-28T04:24:24.000+0000 Close
2016-07-28T04:24:24.000+0000 Close
2016-07-28T04:24:24.000+0000 Open
2016-07-28T04:24:25.000+0000 Open
2016-07-28T04:24:30.000+0000 Close
2016-07-28T04:24:34.000+0000 Close
2016-07-28T04:24:35.000+0000 Open
2016-07-28T04:24:36.000+0000 Open
2016-07-28T04:24:39.000+0000 Close
2016-07-28T04:24:39.000+0000 Open
2016-07-28T04:24:41.000+0000 Open
2016-07-28T04:24:42.000+0000 Open
2016-07-28T04:24:43.000+0000 Close
2016-07-28T04:24:43.000+0000 Close
2016-07-28T04:24:44.000+0000 Open
2016-07-28T04:24:47.000+0000 Close
2016-07-28T04:24:47.000+0000 Close
2016-07-28T04:24:51.000+0000 Open
2016-07-28T04:24:52.000+0000 Close
2016-07-28T04:24:53.000+0000 Open
2016-07-28T04:24:54.000+0000 Open
2016-07-28T04:24:56.000+0000 Open
2016-07-28T04:24:56.000+0000 Open
2016-07-28T04:24:59.000+0000 Close
2016-07-28T04:24:59.000+0000 Open
2016-07-28T04:25:03.000+0000 Close
2016-07-28T04:25:03.000+0000 Open
2016-07-28T04:25:03.000+0000 Open
2016-07-28T04:25:04.000+0000 Close
2016-07-28T04:25:06.000+0000 Open
2016-07-28T04:25:08.000+0000 Open
2016-07-28T04:25:08.000+0000 Open
2016-07-28T04:25:09.000+0000 Close
2016-07-28T04:25:10.000+0000 Close
2016-07-28T04:25:10.000+0000 Close
2016-07-28T04:25:15.000+0000 Open
2016-07-28T04:25:16.000+0000 Close
2016-07-28T04:25:20.000+0000 Close
2016-07-28T04:25:21.000+0000 Close
2016-07-28T04:25:21.000+0000 Close
2016-07-28T04:25:21.000+0000 Open
2016-07-28T04:25:22.000+0000 Close
2016-07-28T04:25:22.000+0000 Open
2016-07-28T04:25:23.000+0000 Close
2016-07-28T04:25:23.000+0000 Close
2016-07-28T04:25:23.000+0000 Open
2016-07-28T04:25:25.000+0000 Close
2016-07-28T04:25:25.000+0000 Close
2016-07-28T04:25:27.000+0000 Close
2016-07-28T04:25:27.000+0000 Open
2016-07-28T04:25:29.000+0000 Open
2016-07-28T04:25:30.000+0000 Close
2016-07-28T04:25:42.000+0000 Close
2016-07-28T04:25:43.000+0000 Close
2016-07-28T04:25:44.000+0000 Open
2016-07-28T04:25:49.000+0000 Close
2016-07-28T04:25:49.000+0000 Close
2016-07-28T04:25:52.000+0000 Open
2016-07-28T04:25:55.000+0000 Open
2016-07-28T04:25:56.000+0000 Close
2016-07-28T04:25:57.000+0000 Close
2016-07-28T04:25:57.000+0000 Open
2016-07-28T04:25:58.000+0000 Open
2016-07-28T04:26:04.000+0000 Open
2016-07-28T04:26:05.000+0000 Close
2016-07-28T04:26:08.000+0000 Close
2016-07-28T04:26:15.000+0000 Close
2016-07-28T04:26:16.000+0000 Close
2016-07-28T04:26:18.000+0000 Open
2016-07-28T04:26:20.000+0000 Close
2016-07-28T04:26:22.000+0000 Close
2016-07-28T04:26:22.000+0000 Close
2016-07-28T04:26:22.000+0000 Close
2016-07-28T04:26:24.000+0000 Open
2016-07-28T04:26:24.000+0000 Open
2016-07-28T04:26:26.000+0000 Open
2016-07-28T04:26:30.000+0000 Close
2016-07-28T04:26:30.000+0000 Open
2016-07-28T04:26:31.000+0000 Close
2016-07-28T04:26:32.000+0000 Close
2016-07-28T04:26:32.000+0000 Close
2016-07-28T04:26:32.000+0000 Open
2016-07-28T04:26:34.000+0000 Close
2016-07-28T04:26:35.000+0000 Open
2016-07-28T04:26:36.000+0000 Close
2016-07-28T04:26:36.000+0000 Open
2016-07-28T04:26:45.000+0000 Close
2016-07-28T04:26:45.000+0000 Close
2016-07-28T04:26:46.000+0000 Open
2016-07-28T04:26:47.000+0000 Close
2016-07-28T04:26:50.000+0000 Close
2016-07-28T04:26:50.000+0000 Open
2016-07-28T04:26:54.000+0000 Close
2016-07-28T04:26:54.000+0000 Open
2016-07-28T04:26:57.000+0000 Open
2016-07-28T04:26:58.000+0000 Close
2016-07-28T04:26:58.000+0000 Open
2016-07-28T04:26:59.000+0000 Close
2016-07-28T04:27:05.000+0000 Close
2016-07-28T04:27:06.000+0000 Open
2016-07-28T04:27:15.000+0000 Close
2016-07-28T04:27:18.000+0000 Close
2016-07-28T04:27:18.000+0000 Open
2016-07-28T04:27:19.000+0000 Open
2016-07-28T04:27:20.000+0000 Close
2016-07-28T04:27:21.000+0000 Open
2016-07-28T04:27:24.000+0000 Close
2016-07-28T04:27:30.000+0000 Close
2016-07-28T04:27:31.000+0000 Close
2016-07-28T04:27:31.000+0000 Open
2016-07-28T04:27:34.000+0000 Open
2016-07-28T04:27:34.000+0000 Open
2016-07-28T04:27:35.000+0000 Close
2016-07-28T04:27:36.000+0000 Close
2016-07-28T04:27:36.000+0000 Close
2016-07-28T04:27:41.000+0000 Close
2016-07-28T04:27:42.000+0000 Close
2016-07-28T04:27:44.000+0000 Close
2016-07-28T04:27:45.000+0000 Open
2016-07-28T04:27:45.000+0000 Open
2016-07-28T04:27:55.000+0000 Open
2016-07-28T04:27:58.000+0000 Open
2016-07-28T04:28:03.000+0000 Close
2016-07-28T04:28:04.000+0000 Close
2016-07-28T04:28:04.000+0000 Open
2016-07-28T04:28:05.000+0000 Close
2016-07-28T04:28:09.000+0000 Open
2016-07-28T04:28:11.000+0000 Close
2016-07-28T04:28:11.000+0000 Open
2016-07-28T04:28:12.000+0000 Open
2016-07-28T04:28:15.000+0000 Close
2016-07-28T04:28:15.000+0000 Open
2016-07-28T04:28:15.000+0000 Open
2016-07-28T04:28:18.000+0000 Close
2016-07-28T04:28:19.000+0000 Open
2016-07-28T04:28:20.000+0000 Open
2016-07-28T04:28:28.000+0000 Open
2016-07-28T04:28:31.000+0000 Close
2016-07-28T04:28:31.000+0000 Open
2016-07-28T04:28:32.000+0000 Open
2016-07-28T04:28:33.000+0000 Open
2016-07-28T04:28:37.000+0000 Close
2016-07-28T04:28:40.000+0000 Close
2016-07-28T04:28:40.000+0000 Open
2016-07-28T04:28:42.000+0000 Open
2016-07-28T04:28:47.000+0000 Close
2016-07-28T04:28:50.000+0000 Close
2016-07-28T04:28:50.000+0000 Open
2016-07-28T04:28:55.000+0000 Close
2016-07-28T04:28:56.000+0000 Open
2016-07-28T04:28:58.000+0000 Open
2016-07-28T04:28:59.000+0000 Close
2016-07-28T04:29:02.000+0000 Close
2016-07-28T04:29:02.000+0000 Open
2016-07-28T04:29:03.000+0000 Close
2016-07-28T04:29:03.000+0000 Close
2016-07-28T04:29:10.000+0000 Close
2016-07-28T04:29:10.000+0000 Close
2016-07-28T04:29:10.000+0000 Close
2016-07-28T04:29:12.000+0000 Close
2016-07-28T04:29:14.000+0000 Close
2016-07-28T04:29:15.000+0000 Close
2016-07-28T04:29:16.000+0000 Close
2016-07-28T04:29:17.000+0000 Close
2016-07-28T04:29:20.000+0000 Close
2016-07-28T04:29:22.000+0000 Close
2016-07-28T04:29:23.000+0000 Open
2016-07-28T04:29:28.000+0000 Close
2016-07-28T04:29:31.000+0000 Close
2016-07-28T04:29:31.000+0000 Open
2016-07-28T04:29:31.000+0000 Open
2016-07-28T04:29:33.000+0000 Close
2016-07-28T04:29:35.000+0000 Close
2016-07-28T04:29:39.000+0000 Open
2016-07-28T04:29:41.000+0000 Open
2016-07-28T04:29:42.000+0000 Close
2016-07-28T04:29:43.000+0000 Close
2016-07-28T04:29:46.000+0000 Close
2016-07-28T04:29:49.000+0000 Close
2016-07-28T04:29:49.000+0000 Close
2016-07-28T04:29:50.000+0000 Open
2016-07-28T04:29:54.000+0000 Close
2016-07-28T04:29:54.000+0000 Open
2016-07-28T04:29:57.000+0000 Open
2016-07-28T04:29:57.000+0000 Open
2016-07-28T04:29:58.000+0000 Close
2016-07-28T04:29:59.000+0000 Close
2016-07-28T04:29:59.000+0000 Open
2016-07-28T04:30:00.000+0000 Open
2016-07-28T04:30:05.000+0000 Open
2016-07-28T04:30:06.000+0000 Close
2016-07-28T04:30:06.000+0000 Open
2016-07-28T04:30:07.000+0000 Close
2016-07-28T04:30:07.000+0000 Open
2016-07-28T04:30:08.000+0000 Close
2016-07-28T04:30:09.000+0000 Open
2016-07-28T04:30:11.000+0000 Open
2016-07-28T04:30:13.000+0000 Open
2016-07-28T04:30:14.000+0000 Close
2016-07-28T04:30:14.000+0000 Open
2016-07-28T04:30:15.000+0000 Open
2016-07-28T04:30:17.000+0000 Open
2016-07-28T04:30:20.000+0000 Open
2016-07-28T04:30:21.000+0000 Open
2016-07-28T04:30:22.000+0000 Close
2016-07-28T04:30:22.000+0000 Open
2016-07-28T04:30:24.000+0000 Open
2016-07-28T04:30:27.000+0000 Close
2016-07-28T04:30:28.000+0000 Close
2016-07-28T04:30:32.000+0000 Open
2016-07-28T04:30:38.000+0000 Close
2016-07-28T04:30:42.000+0000 Close
2016-07-28T04:30:43.000+0000 Open
2016-07-28T04:30:46.000+0000 Open
2016-07-28T04:30:47.000+0000 Open
2016-07-28T04:30:48.000+0000 Close
2016-07-28T04:30:53.000+0000 Open
2016-07-28T04:30:54.000+0000 Close
2016-07-28T04:30:56.000+0000 Open
2016-07-28T04:30:56.000+0000 Open
2016-07-28T04:30:57.000+0000 Close
2016-07-28T04:30:57.000+0000 Close
2016-07-28T04:31:00.000+0000 Open
2016-07-28T04:31:02.000+0000 Open
2016-07-28T04:31:03.000+0000 Close
2016-07-28T04:31:03.000+0000 Open
2016-07-28T04:31:05.000+0000 Open
2016-07-28T04:31:06.000+0000 Open
2016-07-28T04:31:09.000+0000 Close
2016-07-28T04:31:11.000+0000 Open
2016-07-28T04:31:15.000+0000 Open
2016-07-28T04:31:19.000+0000 Close
2016-07-28T04:31:21.000+0000 Close
2016-07-28T04:31:23.000+0000 Open
2016-07-28T04:31:26.000+0000 Open
2016-07-28T04:31:30.000+0000 Close
2016-07-28T04:31:31.000+0000 Open
2016-07-28T04:31:37.000+0000 Close
2016-07-28T04:31:38.000+0000 Close
2016-07-28T04:31:39.000+0000 Close
2016-07-28T04:31:39.000+0000 Open
2016-07-28T04:31:40.000+0000 Close
2016-07-28T04:31:40.000+0000 Close
2016-07-28T04:31:40.000+0000 Open
2016-07-28T04:31:42.000+0000 Close
2016-07-28T04:31:42.000+0000 Close
2016-07-28T04:31:43.000+0000 Close
2016-07-28T04:31:45.000+0000 Close
2016-07-28T04:31:45.000+0000 Open
2016-07-28T04:31:49.000+0000 Close
2016-07-28T04:31:49.000+0000 Open
2016-07-28T04:31:53.000+0000 Close
2016-07-28T04:31:53.000+0000 Open
2016-07-28T04:32:00.000+0000 Open
2016-07-28T04:32:01.000+0000 Close
2016-07-28T04:32:02.000+0000 Close
2016-07-28T04:32:06.000+0000 Open
2016-07-28T04:32:07.000+0000 Close
2016-07-28T04:32:08.000+0000 Close
2016-07-28T04:32:08.000+0000 Open
2016-07-28T04:32:11.000+0000 Open
2016-07-28T04:32:11.000+0000 Open
2016-07-28T04:32:13.000+0000 Open
2016-07-28T04:32:14.000+0000 Close
2016-07-28T04:32:15.000+0000 Close
2016-07-28T04:32:15.000+0000 Open
2016-07-28T04:32:16.000+0000 Close
2016-07-28T04:32:16.000+0000 Close
2016-07-28T04:32:22.000+0000 Close
2016-07-28T04:32:22.000+0000 Open
2016-07-28T04:32:25.000+0000 Open
2016-07-28T04:32:26.000+0000 Close
2016-07-28T04:32:26.000+0000 Open
2016-07-28T04:32:26.000+0000 Open
2016-07-28T04:32:28.000+0000 Open
2016-07-28T04:32:30.000+0000 Close
2016-07-28T04:32:34.000+0000 Open
2016-07-28T04:32:41.000+0000 Close
2016-07-28T04:32:44.000+0000 Open
2016-07-28T04:32:45.000+0000 Open
2016-07-28T04:32:46.000+0000 Open
2016-07-28T04:32:47.000+0000 Open
2016-07-28T04:32:49.000+0000 Close
2016-07-28T04:32:50.000+0000 Open
2016-07-28T04:32:52.000+0000 Close
2016-07-28T04:32:55.000+0000 Open
2016-07-28T04:32:56.000+0000 Open
2016-07-28T04:32:57.000+0000 Open
2016-07-28T04:32:59.000+0000 Close
2016-07-28T04:32:59.000+0000 Open
2016-07-28T04:33:00.000+0000 Close
2016-07-28T04:33:01.000+0000 Close
2016-07-28T04:33:02.000+0000 Close
2016-07-28T04:33:02.000+0000 Open
2016-07-28T04:33:07.000+0000 Close
2016-07-28T04:33:07.000+0000 Close
2016-07-28T04:33:07.000+0000 Open
2016-07-28T04:33:07.000+0000 Open
2016-07-28T04:33:08.000+0000 Open
2016-07-28T04:33:09.000+0000 Open
2016-07-28T04:33:11.000+0000 Close
2016-07-28T04:33:11.000+0000 Open
2016-07-28T04:33:13.000+0000 Close
2016-07-28T04:33:13.000+0000 Close
2016-07-28T04:33:14.000+0000 Close
2016-07-28T04:33:19.000+0000 Open
2016-07-28T04:33:19.000+0000 Open
2016-07-28T04:33:23.000+0000 Close
2016-07-28T04:33:25.000+0000 Close
2016-07-28T04:33:26.000+0000 Close
2016-07-28T04:33:28.000+0000 Open
2016-07-28T04:33:29.000+0000 Open
2016-07-28T04:33:31.000+0000 Close
2016-07-28T04:33:32.000+0000 Open
2016-07-28T04:33:34.000+0000 Open
2016-07-28T04:33:37.000+0000 Close
2016-07-28T04:33:39.000+0000 Open
2016-07-28T04:33:40.000+0000 Open
2016-07-28T04:33:41.000+0000 Open
2016-07-28T04:33:42.000+0000 Open
2016-07-28T04:33:45.000+0000 Open
2016-07-28T04:33:50.000+0000 Open
2016-07-28T04:34:00.000+0000 Close
2016-07-28T04:34:05.000+0000 Close
2016-07-28T04:34:05.000+0000 Open
2016-07-28T04:34:06.000+0000 Close
2016-07-28T04:34:08.000+0000 Open
2016-07-28T04:34:14.000+0000 Open
2016-07-28T04:34:15.000+0000 Close
2016-07-28T04:34:17.000+0000 Close
2016-07-28T04:34:18.000+0000 Close
2016-07-28T04:34:19.000+0000 Close
2016-07-28T04:34:20.000+0000 Open
2016-07-28T04:34:21.000+0000 Close
2016-07-28T04:34:21.000+0000 Close
2016-07-28T04:34:28.000+0000 Close
2016-07-28T04:34:28.000+0000 Close
2016-07-28T04:34:29.000+0000 Close
2016-07-28T04:34:29.000+0000 Open
2016-07-28T04:34:41.000+0000 Open
2016-07-28T04:34:44.000+0000 Open
2016-07-28T04:34:46.000+0000 Open
2016-07-28T04:34:48.000+0000 Close
2016-07-28T04:34:49.000+0000 Open
2016-07-28T04:34:51.000+0000 Open
2016-07-28T04:34:54.000+0000 Close
2016-07-28T04:34:55.000+0000 Open
2016-07-28T04:34:56.000+0000 Close
2016-07-28T04:35:00.000+0000 Open
2016-07-28T04:35:04.000+0000 Open
2016-07-28T04:35:07.000+0000 Close
2016-07-28T04:35:08.000+0000 Open
2016-07-28T04:35:09.000+0000 Close
2016-07-28T04:35:15.000+0000 Close
2016-07-28T04:35:16.000+0000 Open
2016-07-28T04:35:19.000+0000 Close
2016-07-28T04:35:20.000+0000 Open
2016-07-28T04:35:22.000+0000 Open
2016-07-28T04:35:26.000+0000 Open
2016-07-28T04:35:28.000+0000 Open
2016-07-28T04:35:31.000+0000 Close
2016-07-28T04:35:31.000+0000 Close
2016-07-28T04:35:31.000+0000 Open
2016-07-28T04:35:36.000+0000 Open
2016-07-28T04:35:36.000+0000 Open
2016-07-28T04:35:37.000+0000 Open
2016-07-28T04:35:38.000+0000 Close
2016-07-28T04:35:39.000+0000 Close
2016-07-28T04:35:45.000+0000 Open
2016-07-28T04:35:51.000+0000 Close
2016-07-28T04:35:55.000+0000 Open
2016-07-28T04:35:59.000+0000 Open
2016-07-28T04:36:00.000+0000 Close
2016-07-28T04:36:00.000+0000 Open
2016-07-28T04:36:02.000+0000 Close
2016-07-28T04:36:03.000+0000 Close
2016-07-28T04:36:05.000+0000 Close
2016-07-28T04:36:05.000+0000 Open
2016-07-28T04:36:14.000+0000 Open
2016-07-28T04:36:15.000+0000 Open
2016-07-28T04:36:16.000+0000 Open
2016-07-28T04:36:17.000+0000 Open
2016-07-28T04:36:18.000+0000 Open
2016-07-28T04:36:23.000+0000 Close
2016-07-28T04:36:23.000+0000 Open
2016-07-28T04:36:25.000+0000 Open
2016-07-28T04:36:29.000+0000 Close
2016-07-28T04:36:29.000+0000 Open
2016-07-28T04:36:32.000+0000 Close
2016-07-28T04:36:33.000+0000 Open
2016-07-28T04:36:34.000+0000 Open
2016-07-28T04:36:36.000+0000 Close
2016-07-28T04:36:36.000+0000 Open
2016-07-28T04:36:36.000+0000 Open
2016-07-28T04:36:50.000+0000 Open
2016-07-28T04:36:50.000+0000 Open
2016-07-28T04:36:51.000+0000 Close
2016-07-28T04:36:51.000+0000 Open
2016-07-28T04:36:53.000+0000 Open
2016-07-28T04:36:58.000+0000 Close
2016-07-28T04:37:00.000+0000 Open
2016-07-28T04:37:01.000+0000 Open
2016-07-28T04:37:04.000+0000 Close
2016-07-28T04:37:04.000+0000 Open
2016-07-28T04:37:06.000+0000 Open
2016-07-28T04:37:09.000+0000 Close
2016-07-28T04:37:09.000+0000 Open
2016-07-28T04:37:11.000+0000 Close
2016-07-28T04:37:11.000+0000 Close
2016-07-28T04:37:12.000+0000 Open
2016-07-28T04:37:16.000+0000 Close
2016-07-28T04:37:18.000+0000 Close
2016-07-28T04:37:18.000+0000 Open
2016-07-28T04:37:19.000+0000 Open
2016-07-28T04:37:22.000+0000 Close
2016-07-28T04:37:22.000+0000 Open
2016-07-28T04:37:24.000+0000 Close
2016-07-28T04:37:27.000+0000 Close
2016-07-28T04:37:28.000+0000 Open
2016-07-28T04:37:32.000+0000 Open
2016-07-28T04:37:33.000+0000 Close
2016-07-28T04:37:34.000+0000 Open
2016-07-28T04:37:34.000+0000 Open
2016-07-28T04:37:38.000+0000 Close
2016-07-28T04:37:38.000+0000 Open
2016-07-28T04:37:41.000+0000 Close
2016-07-28T04:37:42.000+0000 Close
2016-07-28T04:37:42.000+0000 Close
2016-07-28T04:37:42.000+0000 Close
2016-07-28T04:37:43.000+0000 Open
2016-07-28T04:37:43.000+0000 Open
2016-07-28T04:37:44.000+0000 Close
2016-07-28T04:37:44.000+0000 Close
2016-07-28T04:37:44.000+0000 Open
2016-07-28T04:37:47.000+0000 Close
2016-07-28T04:37:47.000+0000 Close
2016-07-28T04:37:50.000+0000 Open
2016-07-28T04:37:54.000+0000 Close
2016-07-28T04:37:54.000+0000 Open
2016-07-28T04:37:55.000+0000 Open
2016-07-28T04:37:56.000+0000 Open
2016-07-28T04:37:57.000+0000 Close
2016-07-28T04:37:58.000+0000 Close
2016-07-28T04:38:04.000+0000 Open
2016-07-28T04:38:05.000+0000 Close
2016-07-28T04:38:06.000+0000 Open
2016-07-28T04:38:07.000+0000 Close
2016-07-28T04:38:07.000+0000 Open
2016-07-28T04:38:08.000+0000 Close
2016-07-28T04:38:11.000+0000 Open
2016-07-28T04:38:15.000+0000 Close
2016-07-28T04:38:18.000+0000 Close
2016-07-28T04:38:19.000+0000 Open
2016-07-28T04:38:23.000+0000 Open
2016-07-28T04:38:28.000+0000 Close
2016-07-28T04:38:28.000+0000 Open
2016-07-28T04:38:29.000+0000 Close
2016-07-28T04:38:29.000+0000 Close
2016-07-28T04:38:31.000+0000 Open
2016-07-28T04:38:32.000+0000 Open
2016-07-28T04:38:33.000+0000 Open
2016-07-28T04:38:34.000+0000 Open
2016-07-28T04:38:37.000+0000 Open
2016-07-28T04:38:43.000+0000 Open
2016-07-28T04:38:47.000+0000 Close
2016-07-28T04:38:50.000+0000 Close
2016-07-28T04:38:53.000+0000 Close
2016-07-28T04:38:53.000+0000 Open
2016-07-28T04:38:54.000+0000 Close
2016-07-28T04:38:56.000+0000 Close
2016-07-28T04:38:57.000+0000 Close
2016-07-28T04:38:59.000+0000 Open
2016-07-28T04:39:01.000+0000 Close
2016-07-28T04:39:02.000+0000 Close
2016-07-28T04:39:02.000+0000 Close
2016-07-28T04:39:03.000+0000 Open
2016-07-28T04:39:05.000+0000 Close
2016-07-28T04:39:09.000+0000 Open
2016-07-28T04:39:10.000+0000 Open
2016-07-28T04:39:11.000+0000 Close
2016-07-28T04:39:12.000+0000 Close
2016-07-28T04:39:12.000+0000 Open
2016-07-28T04:39:14.000+0000 Close
2016-07-28T04:39:15.000+0000 Close
2016-07-28T04:39:15.000+0000 Close
2016-07-28T04:39:17.000+0000 Open
2016-07-28T04:39:19.000+0000 Close
2016-07-28T04:39:19.000+0000 Close
2016-07-28T04:39:21.000+0000 Close
2016-07-28T04:39:25.000+0000 Close
2016-07-28T04:39:25.000+0000 Open
2016-07-28T04:39:26.000+0000 Close
2016-07-28T04:39:26.000+0000 Close
2016-07-28T04:39:27.000+0000 Close
2016-07-28T04:39:28.000+0000 Open
2016-07-28T04:39:28.000+0000 Open
2016-07-28T04:39:28.000+0000 Open
2016-07-28T04:39:31.000+0000 Close
2016-07-28T04:39:33.000+0000 Open
2016-07-28T04:39:33.000+0000 Open
2016-07-28T04:39:38.000+0000 Open
2016-07-28T04:39:38.000+0000 Open
2016-07-28T04:39:41.000+0000 Open
2016-07-28T04:39:41.000+0000 Open
2016-07-28T04:39:46.000+0000 Close
2016-07-28T04:39:48.000+0000 Close
2016-07-28T04:39:51.000+0000 Close
2016-07-28T04:39:54.000+0000 Open
2016-07-28T04:39:58.000+0000 Close
2016-07-28T04:40:03.000+0000 Open
2016-07-28T04:40:07.000+0000 Close
2016-07-28T04:40:07.000+0000 Close
2016-07-28T04:40:07.000+0000 Close
2016-07-28T04:40:09.000+0000 Close
2016-07-28T04:40:09.000+0000 Open
2016-07-28T04:40:11.000+0000 Close
2016-07-28T04:40:11.000+0000 Close
2016-07-28T04:40:12.000+0000 Open
2016-07-28T04:40:12.000+0000 Open
2016-07-28T04:40:13.000+0000 Open
2016-07-28T04:40:19.000+0000 Close
2016-07-28T04:40:22.000+0000 Open
2016-07-28T04:40:23.000+0000 Close
2016-07-28T04:40:24.000+0000 Open
2016-07-28T04:40:25.000+0000 Open
2016-07-28T04:40:28.000+0000 Close
2016-07-28T04:40:30.000+0000 Open
2016-07-28T04:40:36.000+0000 Close
2016-07-28T04:40:43.000+0000 Close
2016-07-28T04:40:44.000+0000 Open
2016-07-28T04:40:46.000+0000 Close
2016-07-28T04:40:46.000+0000 Close
2016-07-28T04:40:48.000+0000 Close
2016-07-28T04:40:50.000+0000 Open
2016-07-28T04:40:51.000+0000 Open
2016-07-28T04:40:51.000+0000 Open
2016-07-28T04:40:52.000+0000 Close
2016-07-28T04:40:54.000+0000 Open
2016-07-28T04:40:55.000+0000 Close
2016-07-28T04:40:57.000+0000 Close
2016-07-28T04:40:57.000+0000 Close
2016-07-28T04:40:59.000+0000 Open
2016-07-28T04:41:01.000+0000 Open
2016-07-28T04:41:07.000+0000 Open
2016-07-28T04:41:11.000+0000 Close
2016-07-28T04:41:11.000+0000 Open
2016-07-28T04:41:12.000+0000 Open
2016-07-28T04:41:14.000+0000 Close
2016-07-28T04:41:15.000+0000 Open
2016-07-28T04:41:17.000+0000 Close
2016-07-28T04:41:17.000+0000 Close
2016-07-28T04:41:18.000+0000 Open
2016-07-28T04:41:19.000+0000 Open
2016-07-28T04:41:20.000+0000 Close
2016-07-28T04:41:21.000+0000 Close
2016-07-28T04:41:24.000+0000 Close
2016-07-28T04:41:25.000+0000 Open
2016-07-28T04:41:26.000+0000 Close
2016-07-28T04:41:26.000+0000 Open
2016-07-28T04:41:29.000+0000 Close
2016-07-28T04:41:29.000+0000 Open
2016-07-28T04:41:35.000+0000 Close
2016-07-28T04:41:36.000+0000 Close
2016-07-28T04:41:37.000+0000 Close
2016-07-28T04:41:38.000+0000 Open
2016-07-28T04:41:40.000+0000 Close
2016-07-28T04:41:40.000+0000 Close
2016-07-28T04:41:41.000+0000 Open
2016-07-28T04:41:43.000+0000 Close
2016-07-28T04:41:44.000+0000 Open
2016-07-28T04:41:48.000+0000 Open
2016-07-28T04:41:50.000+0000 Open
2016-07-28T04:41:50.000+0000 Open
2016-07-28T04:41:51.000+0000 Open
2016-07-28T04:41:52.000+0000 Close
2016-07-28T04:41:52.000+0000 Open
2016-07-28T04:41:54.000+0000 Open
2016-07-28T04:42:01.000+0000 Close
2016-07-28T04:42:05.000+0000 Close
2016-07-28T04:42:07.000+0000 Open
2016-07-28T04:42:10.000+0000 Close
2016-07-28T04:42:11.000+0000 Open
2016-07-28T04:42:11.000+0000 Open
2016-07-28T04:42:19.000+0000 Open
2016-07-28T04:42:21.000+0000 Open
2016-07-28T04:42:23.000+0000 Close
2016-07-28T04:42:23.000+0000 Close
2016-07-28T04:42:23.000+0000 Close
2016-07-28T04:42:23.000+0000 Open
2016-07-28T04:42:25.000+0000 Close
2016-07-28T04:42:27.000+0000 Close
2016-07-28T04:42:30.000+0000 Open
2016-07-28T04:42:31.000+0000 Open
2016-07-28T04:42:33.000+0000 Close
2016-07-28T04:42:34.000+0000 Close
2016-07-28T04:42:35.000+0000 Close
2016-07-28T04:42:38.000+0000 Close
2016-07-28T04:42:38.000+0000 Open
2016-07-28T04:42:44.000+0000 Close
2016-07-28T04:42:44.000+0000 Close
2016-07-28T04:42:44.000+0000 Open
2016-07-28T04:42:44.000+0000 Open
2016-07-28T04:42:48.000+0000 Close
2016-07-28T04:42:49.000+0000 Open
2016-07-28T04:42:49.000+0000 Open
2016-07-28T04:42:49.000+0000 Open
2016-07-28T04:42:50.000+0000 Open
2016-07-28T04:42:52.000+0000 Open
2016-07-28T04:42:53.000+0000 Open
2016-07-28T04:42:54.000+0000 Close
2016-07-28T04:42:55.000+0000 Open
2016-07-28T04:42:56.000+0000 Close
2016-07-28T04:42:57.000+0000 Close
2016-07-28T04:43:03.000+0000 Open
2016-07-28T04:43:05.000+0000 Close
2016-07-28T04:43:07.000+0000 Open
2016-07-28T04:43:09.000+0000 Close
2016-07-28T04:43:11.000+0000 Close
2016-07-28T04:43:12.000+0000 Open
2016-07-28T04:43:14.000+0000 Open
2016-07-28T04:43:15.000+0000 Close
2016-07-28T04:43:15.000+0000 Open
2016-07-28T04:43:18.000+0000 Close
2016-07-28T04:43:21.000+0000 Open
2016-07-28T04:43:25.000+0000 Close
2016-07-28T04:43:26.000+0000 Open
2016-07-28T04:43:31.000+0000 Close
2016-07-28T04:43:33.000+0000 Open
2016-07-28T04:43:38.000+0000 Open
2016-07-28T04:43:39.000+0000 Open
2016-07-28T04:43:41.000+0000 Close
2016-07-28T04:43:43.000+0000 Open
2016-07-28T04:43:43.000+0000 Open
2016-07-28T04:43:45.000+0000 Open
2016-07-28T04:43:46.000+0000 Close
2016-07-28T04:43:46.000+0000 Open
2016-07-28T04:43:47.000+0000 Close
2016-07-28T04:43:49.000+0000 Open
2016-07-28T04:43:50.000+0000 Open
2016-07-28T04:43:52.000+0000 Open
2016-07-28T04:43:59.000+0000 Open
2016-07-28T04:44:00.000+0000 Close
2016-07-28T04:44:05.000+0000 Close
2016-07-28T04:44:06.000+0000 Close
2016-07-28T04:44:12.000+0000 Close
2016-07-28T04:44:14.000+0000 Open
2016-07-28T04:44:15.000+0000 Close
2016-07-28T04:44:16.000+0000 Open
2016-07-28T04:44:19.000+0000 Close
2016-07-28T04:44:20.000+0000 Open
2016-07-28T04:44:22.000+0000 Open
2016-07-28T04:44:23.000+0000 Open
2016-07-28T04:44:30.000+0000 Open
2016-07-28T04:44:30.000+0000 Open
2016-07-28T04:44:31.000+0000 Close
2016-07-28T04:44:32.000+0000 Close
2016-07-28T04:44:34.000+0000 Open
2016-07-28T04:44:36.000+0000 Close
2016-07-28T04:44:36.000+0000 Open
2016-07-28T04:44:38.000+0000 Open
2016-07-28T04:44:42.000+0000 Close
2016-07-28T04:44:47.000+0000 Close
2016-07-28T04:44:48.000+0000 Close
2016-07-28T04:44:51.000+0000 Open
2016-07-28T04:44:52.000+0000 Open
2016-07-28T04:44:53.000+0000 Close
2016-07-28T04:44:54.000+0000 Open
2016-07-28T04:44:57.000+0000 Close
2016-07-28T04:45:03.000+0000 Open
2016-07-28T04:45:04.000+0000 Open
2016-07-28T04:45:09.000+0000 Open
2016-07-28T04:45:10.000+0000 Open
2016-07-28T04:45:18.000+0000 Close
2016-07-28T04:45:18.000+0000 Open
2016-07-28T04:45:18.000+0000 Open
2016-07-28T04:45:19.000+0000 Close
2016-07-28T04:45:20.000+0000 Open
2016-07-28T04:45:21.000+0000 Open
2016-07-28T04:45:22.000+0000 Close
2016-07-28T04:45:22.000+0000 Close
2016-07-28T04:45:31.000+0000 Close
2016-07-28T04:45:32.000+0000 Open
2016-07-28T04:45:38.000+0000 Open
2016-07-28T04:45:39.000+0000 Open
2016-07-28T04:45:41.000+0000 Close
2016-07-28T04:45:41.000+0000 Open
2016-07-28T04:45:44.000+0000 Close
2016-07-28T04:45:46.000+0000 Close
2016-07-28T04:45:46.000+0000 Open
2016-07-28T04:45:52.000+0000 Open
2016-07-28T04:46:01.000+0000 Close
2016-07-28T04:46:01.000+0000 Close
2016-07-28T04:46:01.000+0000 Open
2016-07-28T04:46:11.000+0000 Close
2016-07-28T04:46:11.000+0000 Close
2016-07-28T04:46:15.000+0000 Open
2016-07-28T04:46:18.000+0000 Close
2016-07-28T04:46:18.000+0000 Close
2016-07-28T04:46:24.000+0000 Open
2016-07-28T04:46:27.000+0000 Close
2016-07-28T04:46:27.000+0000 Close
2016-07-28T04:46:30.000+0000 Open
2016-07-28T04:46:31.000+0000 Open
2016-07-28T04:46:31.000+0000 Open
2016-07-28T04:46:33.000+0000 Open
2016-07-28T04:46:34.000+0000 Close
2016-07-28T04:46:35.000+0000 Open
2016-07-28T04:46:37.000+0000 Close
2016-07-28T04:46:37.000+0000 Close
2016-07-28T04:46:37.000+0000 Close
2016-07-28T04:46:39.000+0000 Close
2016-07-28T04:46:41.000+0000 Open
2016-07-28T04:46:45.000+0000 Close
2016-07-28T04:46:48.000+0000 Close
2016-07-28T04:46:48.000+0000 Open
2016-07-28T04:46:49.000+0000 Close
2016-07-28T04:46:51.000+0000 Close
2016-07-28T04:46:51.000+0000 Open
2016-07-28T04:46:52.000+0000 Open
2016-07-28T04:46:53.000+0000 Open
2016-07-28T04:46:55.000+0000 Open
2016-07-28T04:46:56.000+0000 Close
2016-07-28T04:46:57.000+0000 Close
2016-07-28T04:47:03.000+0000 Close
2016-07-28T04:47:06.000+0000 Open
2016-07-28T04:47:09.000+0000 Open
2016-07-28T04:47:09.000+0000 Open
2016-07-28T04:47:09.000+0000 Open
2016-07-28T04:47:11.000+0000 Close
2016-07-28T04:47:11.000+0000 Close
2016-07-28T04:47:11.000+0000 Close
2016-07-28T04:47:22.000+0000 Close
2016-07-28T04:47:24.000+0000 Close
2016-07-28T04:47:26.000+0000 Close
2016-07-28T04:47:26.000+0000 Open
2016-07-28T04:47:32.000+0000 Close
2016-07-28T04:47:38.000+0000 Open
2016-07-28T04:47:40.000+0000 Open
2016-07-28T04:47:42.000+0000 Close
2016-07-28T04:47:46.000+0000 Close
2016-07-28T04:47:47.000+0000 Open
2016-07-28T04:47:50.000+0000 Close
2016-07-28T04:47:51.000+0000 Open
2016-07-28T04:47:55.000+0000 Close
2016-07-28T04:47:56.000+0000 Open
2016-07-28T04:47:56.000+0000 Open
2016-07-28T04:47:58.000+0000 Open
2016-07-28T04:48:02.000+0000 Close
2016-07-28T04:48:02.000+0000 Close
2016-07-28T04:48:05.000+0000 Close
2016-07-28T04:48:06.000+0000 Close
2016-07-28T04:48:09.000+0000 Close
2016-07-28T04:48:09.000+0000 Open
2016-07-28T04:48:09.000+0000 Open
2016-07-28T04:48:12.000+0000 Open
2016-07-28T04:48:16.000+0000 Close
2016-07-28T04:48:19.000+0000 Close
2016-07-28T04:48:19.000+0000 Open
2016-07-28T04:48:21.000+0000 Open
2016-07-28T04:48:33.000+0000 Close
2016-07-28T04:48:35.000+0000 Close
2016-07-28T04:48:35.000+0000 Open
2016-07-28T04:48:35.000+0000 Open
2016-07-28T04:48:37.000+0000 Open
2016-07-28T04:48:38.000+0000 Close
2016-07-28T04:48:42.000+0000 Close
2016-07-28T04:48:44.000+0000 Open
2016-07-28T04:48:45.000+0000 Open
2016-07-28T04:48:49.000+0000 Close
2016-07-28T04:48:53.000+0000 Close
2016-07-28T04:48:56.000+0000 Close
2016-07-28T04:48:56.000+0000 Close
2016-07-28T04:48:59.000+0000 Close
2016-07-28T04:49:00.000+0000 Open
2016-07-28T04:49:01.000+0000 Open
2016-07-28T04:49:05.000+0000 Close
2016-07-28T04:49:08.000+0000 Open
2016-07-28T04:49:12.000+0000 Close
2016-07-28T04:49:12.000+0000 Close
2016-07-28T04:49:16.000+0000 Close
2016-07-28T04:49:16.000+0000 Close
2016-07-28T04:49:22.000+0000 Open
2016-07-28T04:49:24.000+0000 Close
2016-07-28T04:49:25.000+0000 Open
2016-07-28T04:49:26.000+0000 Close
2016-07-28T04:49:30.000+0000 Open
2016-07-28T04:49:30.000+0000 Open
2016-07-28T04:49:36.000+0000 Close
2016-07-28T04:49:36.000+0000 Open
2016-07-28T04:49:41.000+0000 Close
2016-07-28T04:49:41.000+0000 Open
2016-07-28T04:49:44.000+0000 Close
2016-07-28T04:49:44.000+0000 Close
2016-07-28T04:49:51.000+0000 Close
2016-07-28T04:49:52.000+0000 Close
2016-07-28T04:49:57.000+0000 Close
2016-07-28T04:50:02.000+0000 Open
2016-07-28T04:50:13.000+0000 Open
2016-07-28T04:50:16.000+0000 Open
2016-07-28T04:50:18.000+0000 Open
2016-07-28T04:50:21.000+0000 Close
2016-07-28T04:50:23.000+0000 Close
2016-07-28T04:50:23.000+0000 Close
2016-07-28T04:50:23.000+0000 Close
2016-07-28T04:50:24.000+0000 Open
2016-07-28T04:50:25.000+0000 Close
2016-07-28T04:50:28.000+0000 Close
2016-07-28T04:50:30.000+0000 Close
2016-07-28T04:50:34.000+0000 Close
2016-07-28T04:50:34.000+0000 Close
2016-07-28T04:50:34.000+0000 Open
2016-07-28T04:50:39.000+0000 Close
2016-07-28T04:50:39.000+0000 Close
2016-07-28T04:50:47.000+0000 Close
2016-07-28T04:50:47.000+0000 Close
2016-07-28T04:50:49.000+0000 Close
2016-07-28T04:50:52.000+0000 Open
2016-07-28T04:50:52.000+0000 Open
2016-07-28T04:51:03.000+0000 Open
2016-07-28T04:51:09.000+0000 Close
2016-07-28T04:51:09.000+0000 Close
2016-07-28T04:51:11.000+0000 Close
2016-07-28T04:51:13.000+0000 Open
2016-07-28T04:51:16.000+0000 Close
2016-07-28T04:51:19.000+0000 Close
2016-07-28T04:51:22.000+0000 Close
2016-07-28T04:51:26.000+0000 Close
2016-07-28T04:51:29.000+0000 Close
2016-07-28T04:51:29.000+0000 Close
2016-07-28T04:51:34.000+0000 Open
2016-07-28T04:51:39.000+0000 Close
2016-07-28T04:51:39.000+0000 Close
2016-07-28T04:51:41.000+0000 Close
2016-07-28T04:51:41.000+0000 Open
2016-07-28T04:51:42.000+0000 Open
2016-07-28T04:51:44.000+0000 Close
2016-07-28T04:51:50.000+0000 Close
2016-07-28T04:51:54.000+0000 Close
2016-07-28T04:51:54.000+0000 Open
2016-07-28T04:51:57.000+0000 Close

Now we can compute the number of "open" and "close" actions in one-hour windows. To do this, we group by the action column and by one-hour windows over the time column.

import org.apache.spark.sql.functions._

val staticCountsDF = 
  staticInputDF
    .groupBy($"action", window($"time", "1 hour"))
    .count()   

// Register the DataFrame as table 'static_counts'
staticCountsDF.createOrReplaceTempView("static_counts")
import org.apache.spark.sql.functions._
staticCountsDF: org.apache.spark.sql.DataFrame = [action: string, window: struct<start: timestamp, end: timestamp> ... 1 more field]

Now we can directly use SQL to query the table. For example, here are the total counts across all the hours.

select action, sum(count) as total_count from static_counts group by action
action total_count
Open 50000.0
Close 50000.0

How about a timeline of windowed counts?

select action, date_format(window.end, "MMM-dd HH:mm") as time, count from static_counts order by time, action
action time count
Close Jul-26 03:00 11.0
Open Jul-26 03:00 179.0
Close Jul-26 04:00 344.0
Open Jul-26 04:00 1001.0
Close Jul-26 05:00 815.0
Open Jul-26 05:00 999.0
Close Jul-26 06:00 1003.0
Open Jul-26 06:00 1000.0
Close Jul-26 07:00 1011.0
Open Jul-26 07:00 993.0
Close Jul-26 08:00 989.0
Open Jul-26 08:00 1008.0
Close Jul-26 09:00 985.0
Open Jul-26 09:00 996.0
Close Jul-26 10:00 983.0
Open Jul-26 10:00 1000.0
Close Jul-26 11:00 1022.0
Open Jul-26 11:00 1007.0
Close Jul-26 12:00 1028.0
Open Jul-26 12:00 991.0
Close Jul-26 13:00 960.0
Open Jul-26 13:00 996.0
Close Jul-26 14:00 1028.0
Open Jul-26 14:00 1006.0
Close Jul-26 15:00 994.0
Open Jul-26 15:00 991.0
Close Jul-26 16:00 988.0
Open Jul-26 16:00 1020.0
Close Jul-26 17:00 984.0
Open Jul-26 17:00 992.0
Close Jul-26 18:00 1036.0
Open Jul-26 18:00 990.0
Close Jul-26 19:00 1001.0
Open Jul-26 19:00 1004.0
Close Jul-26 20:00 967.0
Open Jul-26 20:00 998.0
Close Jul-26 21:00 1035.0
Open Jul-26 21:00 1010.0
Close Jul-26 22:00 995.0
Open Jul-26 22:00 998.0
Close Jul-26 23:00 1036.0
Open Jul-26 23:00 997.0
Close Jul-27 00:00 950.0
Open Jul-27 00:00 1000.0
Close Jul-27 01:00 1008.0
Open Jul-27 01:00 998.0
Close Jul-27 02:00 1013.0
Open Jul-27 02:00 1004.0
Close Jul-27 03:00 971.0
Open Jul-27 03:00 992.0
Close Jul-27 04:00 1025.0
Open Jul-27 04:00 1014.0
Close Jul-27 05:00 989.0
Open Jul-27 05:00 995.0
Close Jul-27 06:00 987.0
Open Jul-27 06:00 986.0
Close Jul-27 07:00 1026.0
Open Jul-27 07:00 1016.0
Close Jul-27 08:00 982.0
Open Jul-27 08:00 998.0
Close Jul-27 09:00 1024.0
Open Jul-27 09:00 1002.0
Close Jul-27 10:00 990.0
Open Jul-27 10:00 992.0
Close Jul-27 11:00 1001.0
Open Jul-27 11:00 1006.0
Close Jul-27 12:00 1006.0
Open Jul-27 12:00 998.0
Close Jul-27 13:00 1035.0
Open Jul-27 13:00 994.0
Close Jul-27 14:00 986.0
Open Jul-27 14:00 1008.0
Close Jul-27 15:00 948.0
Open Jul-27 15:00 984.0
Close Jul-27 16:00 1018.0
Open Jul-27 16:00 1017.0
Close Jul-27 17:00 970.0
Open Jul-27 17:00 992.0
Close Jul-27 18:00 1020.0
Open Jul-27 18:00 1007.0
Close Jul-27 19:00 1036.0
Open Jul-27 19:00 995.0
Close Jul-27 20:00 969.0
Open Jul-27 20:00 1007.0
Close Jul-27 21:00 1025.0
Open Jul-27 21:00 1005.0
Close Jul-27 22:00 979.0
Open Jul-27 22:00 998.0
Close Jul-27 23:00 996.0
Open Jul-27 23:00 986.0
Close Jul-28 00:00 1011.0
Open Jul-28 00:00 1008.0
Close Jul-28 01:00 988.0
Open Jul-28 01:00 1000.0
Close Jul-28 02:00 1010.0
Open Jul-28 02:00 1001.0
Close Jul-28 03:00 1007.0
Open Jul-28 03:00 1000.0
Close Jul-28 04:00 993.0
Open Jul-28 04:00 996.0
Close Jul-28 05:00 960.0
Open Jul-28 05:00 825.0
Close Jul-28 06:00 671.0
Close Jul-28 07:00 191.0

Note the two ends of the graph. The "close" actions are generated so that they come after the corresponding "open" actions, so there are more "opens" at the beginning and more "closes" at the end.

Stream Processing

Now that we have analyzed the data interactively, let's convert this to a streaming query that continuously updates as new data arrives. Since we just have a static set of files, we are going to emulate a stream from them by reading one file at a time, in the chronological order they were created. The query we have to write is pretty much the same as the interactive query above.

import org.apache.spark.sql.functions._

// Similar to definition of staticInputDF above, just using `readStream` instead of `read`
val streamingInputDF = 
  spark
    .readStream                       // `readStream` instead of `read` for creating streaming DataFrame
    .schema(jsonSchema)               // Set the schema of the JSON data
    .option("maxFilesPerTrigger", 1)  // Treat a sequence of files as a stream by picking one file at a time
    .json(inputPath)

// Same query as staticInputDF
val streamingCountsDF = 
  streamingInputDF
    .groupBy($"action", window($"time", "1 hour"))
    .count()

// Is this DF actually a streaming DF?
streamingCountsDF.isStreaming
import org.apache.spark.sql.functions._
streamingInputDF: org.apache.spark.sql.DataFrame = [time: timestamp, action: string]
streamingCountsDF: org.apache.spark.sql.DataFrame = [action: string, window: struct<start: timestamp, end: timestamp> ... 1 more field]
res9: Boolean = true

As you can see, streamingCountsDF is a streaming DataFrame (streamingCountsDF.isStreaming was true). You can start the streaming computation by defining the sink and starting it. In our case, we want to interactively query the counts (with the same queries as above), so we will write the complete set of one-hour counts to an in-memory table (note that this is for testing purposes only in Spark 2.0).

spark.conf.set("spark.sql.shuffle.partitions", "1")  // keep the size of shuffles small

val query =
  streamingCountsDF
    .writeStream
    .format("memory")        // memory = store in-memory table (for testing only in Spark 2.0)
    .queryName("counts")     // counts = name of the in-memory table
    .outputMode("complete")  // complete = all the counts should be in the table
    .start()
query: org.apache.spark.sql.streaming.StreamingQuery = org.apache.spark.sql.execution.streaming.StreamingQueryWrapper@34d14ed9

query is a handle to the streaming query that is running in the background. This query is continuously picking up files and updating the windowed counts.

Note the status of query in the above cell. Both the Status: ACTIVE and the progress bar show that the query is active. Furthermore, if you expand the >Details above, you will find the number of files it has already processed.
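The same information is available programmatically on the query handle (a minimal sketch; the exact fields of the status and progress objects depend on your Spark version):

query.isActive      // true while the query keeps picking up files
query.status        // current status of the query, e.g. whether a trigger is active
query.lastProgress  // metrics of the most recently completed micro-batch (null before the first one)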

Let's wait a bit for a few files to be processed and then query the in-memory counts table.

Thread.sleep(5000) // wait a bit for computation to start
select action, date_format(window.end, "MMM-dd HH:mm") as time, count from counts order by time, action
action time count
Close Jul-26 03:00 11.0
Open Jul-26 03:00 179.0
Close Jul-26 04:00 344.0
Open Jul-26 04:00 1001.0
Close Jul-26 05:00 815.0
Open Jul-26 05:00 999.0
Close Jul-26 06:00 1003.0
Open Jul-26 06:00 1000.0
Close Jul-26 07:00 1011.0
Open Jul-26 07:00 993.0
Close Jul-26 08:00 989.0
Open Jul-26 08:00 1008.0
Close Jul-26 09:00 985.0
Open Jul-26 09:00 996.0
Close Jul-26 10:00 983.0
Open Jul-26 10:00 1000.0
Close Jul-26 11:00 1022.0
Open Jul-26 11:00 1007.0
Close Jul-26 12:00 1028.0
Open Jul-26 12:00 991.0
Close Jul-26 13:00 960.0
Open Jul-26 13:00 996.0
Close Jul-26 14:00 1028.0
Open Jul-26 14:00 1006.0
Close Jul-26 15:00 994.0
Open Jul-26 15:00 991.0
Close Jul-26 16:00 988.0
Open Jul-26 16:00 1020.0
Close Jul-26 17:00 984.0
Open Jul-26 17:00 992.0
Close Jul-26 18:00 1036.0
Open Jul-26 18:00 990.0
Close Jul-26 19:00 1001.0
Open Jul-26 19:00 1004.0
Close Jul-26 20:00 967.0
Open Jul-26 20:00 998.0
Close Jul-26 21:00 1035.0
Open Jul-26 21:00 1010.0
Close Jul-26 22:00 995.0
Open Jul-26 22:00 998.0
Close Jul-26 23:00 1036.0
Open Jul-26 23:00 997.0
Close Jul-27 00:00 950.0
Open Jul-27 00:00 1000.0
Close Jul-27 01:00 1008.0
Open Jul-27 01:00 998.0
Close Jul-27 02:00 1013.0
Open Jul-27 02:00 1004.0
Close Jul-27 03:00 971.0
Open Jul-27 03:00 992.0
Close Jul-27 04:00 1025.0
Open Jul-27 04:00 1014.0
Close Jul-27 05:00 989.0
Open Jul-27 05:00 995.0
Close Jul-27 06:00 987.0
Open Jul-27 06:00 986.0
Close Jul-27 07:00 1026.0
Open Jul-27 07:00 1016.0
Close Jul-27 08:00 982.0
Open Jul-27 08:00 998.0
Close Jul-27 09:00 1024.0
Open Jul-27 09:00 1002.0
Close Jul-27 10:00 990.0
Open Jul-27 10:00 992.0
Close Jul-27 11:00 1001.0
Open Jul-27 11:00 1006.0
Close Jul-27 12:00 1006.0
Open Jul-27 12:00 998.0
Close Jul-27 13:00 1035.0
Open Jul-27 13:00 994.0
Close Jul-27 14:00 986.0
Open Jul-27 14:00 1008.0
Close Jul-27 15:00 948.0
Open Jul-27 15:00 984.0
Close Jul-27 16:00 1018.0
Open Jul-27 16:00 1017.0
Close Jul-27 17:00 970.0
Open Jul-27 17:00 992.0
Close Jul-27 18:00 1020.0
Open Jul-27 18:00 1007.0
Close Jul-27 19:00 1036.0
Open Jul-27 19:00 995.0
Close Jul-27 20:00 969.0
Open Jul-27 20:00 1007.0
Close Jul-27 21:00 1025.0
Open Jul-27 21:00 1005.0
Close Jul-27 22:00 979.0
Open Jul-27 22:00 998.0
Close Jul-27 23:00 326.0
Open Jul-27 23:00 317.0

We see the timeline of windowed counts (similar to the static one earlier) building up. If we keep running this interactive query repeatedly, we will see the latest updated counts that the streaming query keeps updating in the background.

Thread.sleep(5000)  // wait a bit more for more data to be computed
select action, date_format(window.end, "MMM-dd HH:mm") as time, count from counts order by time, action
action time count
Close Jul-26 03:00 11.0
Open Jul-26 03:00 179.0
Close Jul-26 04:00 344.0
Open Jul-26 04:00 1001.0
Close Jul-26 05:00 815.0
Open Jul-26 05:00 999.0
Close Jul-26 06:00 1003.0
Open Jul-26 06:00 1000.0
Close Jul-26 07:00 1011.0
Open Jul-26 07:00 993.0
Close Jul-26 08:00 989.0
Open Jul-26 08:00 1008.0
Close Jul-26 09:00 985.0
Open Jul-26 09:00 996.0
Close Jul-26 10:00 983.0
Open Jul-26 10:00 1000.0
Close Jul-26 11:00 1022.0
Open Jul-26 11:00 1007.0
Close Jul-26 12:00 1028.0
Open Jul-26 12:00 991.0
Close Jul-26 13:00 960.0
Open Jul-26 13:00 996.0
Close Jul-26 14:00 1028.0
Open Jul-26 14:00 1006.0
Close Jul-26 15:00 994.0
Open Jul-26 15:00 991.0
Close Jul-26 16:00 988.0
Open Jul-26 16:00 1020.0
Close Jul-26 17:00 984.0
Open Jul-26 17:00 992.0
Close Jul-26 18:00 1036.0
Open Jul-26 18:00 990.0
Close Jul-26 19:00 1001.0
Open Jul-26 19:00 1004.0
Close Jul-26 20:00 967.0
Open Jul-26 20:00 998.0
Close Jul-26 21:00 1035.0
Open Jul-26 21:00 1010.0
Close Jul-26 22:00 995.0
Open Jul-26 22:00 998.0
Close Jul-26 23:00 1036.0
Open Jul-26 23:00 997.0
Close Jul-27 00:00 950.0
Open Jul-27 00:00 1000.0
Close Jul-27 01:00 1008.0
Open Jul-27 01:00 998.0
Close Jul-27 02:00 1013.0
Open Jul-27 02:00 1004.0
Close Jul-27 03:00 971.0
Open Jul-27 03:00 992.0
Close Jul-27 04:00 1025.0
Open Jul-27 04:00 1014.0
Close Jul-27 05:00 989.0
Open Jul-27 05:00 995.0
Close Jul-27 06:00 987.0
Open Jul-27 06:00 986.0
Close Jul-27 07:00 1026.0
Open Jul-27 07:00 1016.0
Close Jul-27 08:00 982.0
Open Jul-27 08:00 998.0
Close Jul-27 09:00 1024.0
Open Jul-27 09:00 1002.0
Close Jul-27 10:00 990.0
Open Jul-27 10:00 992.0
Close Jul-27 11:00 1001.0
Open Jul-27 11:00 1006.0
Close Jul-27 12:00 1006.0
Open Jul-27 12:00 998.0
Close Jul-27 13:00 1035.0
Open Jul-27 13:00 994.0
Close Jul-27 14:00 986.0
Open Jul-27 14:00 1008.0
Close Jul-27 15:00 948.0
Open Jul-27 15:00 984.0
Close Jul-27 16:00 1018.0
Open Jul-27 16:00 1017.0
Close Jul-27 17:00 970.0
Open Jul-27 17:00 992.0
Close Jul-27 18:00 1020.0
Open Jul-27 18:00 1007.0
Close Jul-27 19:00 1036.0
Open Jul-27 19:00 995.0
Close Jul-27 20:00 969.0
Open Jul-27 20:00 1007.0
Close Jul-27 21:00 1025.0
Open Jul-27 21:00 1005.0
Close Jul-27 22:00 979.0
Open Jul-27 22:00 998.0
Close Jul-27 23:00 996.0
Open Jul-27 23:00 986.0
Close Jul-28 00:00 1011.0
Open Jul-28 00:00 1008.0
Close Jul-28 01:00 988.0
Open Jul-28 01:00 1000.0
Close Jul-28 02:00 1010.0
Open Jul-28 02:00 1001.0
Close Jul-28 03:00 1007.0
Open Jul-28 03:00 1000.0
Close Jul-28 04:00 993.0
Open Jul-28 04:00 996.0
Close Jul-28 05:00 960.0
Open Jul-28 05:00 825.0
Close Jul-28 06:00 671.0
Close Jul-28 07:00 191.0
Thread.sleep(5000)  // wait a bit more for more data to be computed
select action, date_format(window.end, "MMM-dd HH:mm") as time, count from counts order by time, action
action time count
Close Jul-26 03:00 11.0
Open Jul-26 03:00 179.0
Close Jul-26 04:00 344.0
Open Jul-26 04:00 1001.0
Close Jul-26 05:00 815.0
Open Jul-26 05:00 999.0
Close Jul-26 06:00 1003.0
Open Jul-26 06:00 1000.0
Close Jul-26 07:00 1011.0
Open Jul-26 07:00 993.0
Close Jul-26 08:00 989.0
Open Jul-26 08:00 1008.0
Close Jul-26 09:00 985.0
Open Jul-26 09:00 996.0
Close Jul-26 10:00 983.0
Open Jul-26 10:00 1000.0
Close Jul-26 11:00 1022.0
Open Jul-26 11:00 1007.0
Close Jul-26 12:00 1028.0
Open Jul-26 12:00 991.0
Close Jul-26 13:00 960.0
Open Jul-26 13:00 996.0
Close Jul-26 14:00 1028.0
Open Jul-26 14:00 1006.0
Close Jul-26 15:00 994.0
Open Jul-26 15:00 991.0
Close Jul-26 16:00 988.0
Open Jul-26 16:00 1020.0
Close Jul-26 17:00 984.0
Open Jul-26 17:00 992.0
Close Jul-26 18:00 1036.0
Open Jul-26 18:00 990.0
Close Jul-26 19:00 1001.0
Open Jul-26 19:00 1004.0
Close Jul-26 20:00 967.0
Open Jul-26 20:00 998.0
Close Jul-26 21:00 1035.0
Open Jul-26 21:00 1010.0
Close Jul-26 22:00 995.0
Open Jul-26 22:00 998.0
Close Jul-26 23:00 1036.0
Open Jul-26 23:00 997.0
Close Jul-27 00:00 950.0
Open Jul-27 00:00 1000.0
Close Jul-27 01:00 1008.0
Open Jul-27 01:00 998.0
Close Jul-27 02:00 1013.0
Open Jul-27 02:00 1004.0
Close Jul-27 03:00 971.0
Open Jul-27 03:00 992.0
Close Jul-27 04:00 1025.0
Open Jul-27 04:00 1014.0
Close Jul-27 05:00 989.0
Open Jul-27 05:00 995.0
Close Jul-27 06:00 987.0
Open Jul-27 06:00 986.0
Close Jul-27 07:00 1026.0
Open Jul-27 07:00 1016.0
Close Jul-27 08:00 982.0
Open Jul-27 08:00 998.0
Close Jul-27 09:00 1024.0
Open Jul-27 09:00 1002.0
Close Jul-27 10:00 990.0
Open Jul-27 10:00 992.0
Close Jul-27 11:00 1001.0
Open Jul-27 11:00 1006.0
Close Jul-27 12:00 1006.0
Open Jul-27 12:00 998.0
Close Jul-27 13:00 1035.0
Open Jul-27 13:00 994.0
Close Jul-27 14:00 986.0
Open Jul-27 14:00 1008.0
Close Jul-27 15:00 948.0
Open Jul-27 15:00 984.0
Close Jul-27 16:00 1018.0
Open Jul-27 16:00 1017.0
Close Jul-27 17:00 970.0
Open Jul-27 17:00 992.0
Close Jul-27 18:00 1020.0
Open Jul-27 18:00 1007.0
Close Jul-27 19:00 1036.0
Open Jul-27 19:00 995.0
Close Jul-27 20:00 969.0
Open Jul-27 20:00 1007.0
Close Jul-27 21:00 1025.0
Open Jul-27 21:00 1005.0
Close Jul-27 22:00 979.0
Open Jul-27 22:00 998.0
Close Jul-27 23:00 996.0
Open Jul-27 23:00 986.0
Close Jul-28 00:00 1011.0
Open Jul-28 00:00 1008.0
Close Jul-28 01:00 988.0
Open Jul-28 01:00 1000.0
Close Jul-28 02:00 1010.0
Open Jul-28 02:00 1001.0
Close Jul-28 03:00 1007.0
Open Jul-28 03:00 1000.0
Close Jul-28 04:00 993.0
Open Jul-28 04:00 996.0
Close Jul-28 05:00 960.0
Open Jul-28 05:00 825.0
Close Jul-28 06:00 671.0
Close Jul-28 07:00 191.0

Also, let's see the total number of "opens" and "closes".

select action, sum(count) as total_count from counts group by action order by action
action total_count
Close 50000.0
Open 50000.0

If you keep running the above query repeatedly, you will find that the number of "opens" is always at least the number of "closes", as expected in a data stream where a "close" always appears after its corresponding "open". This shows that Structured Streaming ensures prefix integrity. Read the blog posts linked below if you want to know more.

Note that there are only a few files, so once all of them have been consumed there will be no further updates to the counts. Rerun the query if you want to interact with the streaming query again.

Finally, you can stop the query running in the background, either by clicking on the 'Cancel' link in the cell of the query, or by executing query.stop(). Either way, when the query is stopped, the status of the corresponding cell above will automatically update to TERMINATED.

ScaDaMaLe Course site and book

// this is a companion notebook that generates a bivariate gaussian mixture file stream

import scala.util.Random
import scala.util.Random._

// make a sample to produce a mixture of two normal RVs with standard deviation 1 but with different location or mean parameters
def myMixtureOf2Normals( normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: Random) : (String, Double) = {
  val sample = if (r.nextDouble <= normalWeight) {r.nextGaussian+normalLocation } 
               else {r.nextGaussian + abnormalLocation} 
  Thread.sleep(5L) // sleep 5 milliseconds
  val now = (new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")).format(new java.util.Date())
  return (now,sample)
   }
   
 dbutils.fs.rm("/datasets/streamingFiles/",true) // this is to delete the directory before staring a job
 
val r = new Random(12345L)
var a = 0;
// for loop execution to write files to distributed fs
for( a <- 1 to 20){
  val data = sc.parallelize(Vector.fill(100){myMixtureOf2Normals(1.0, 10.0, 0.99, r)}).coalesce(1).toDF.as[(String,Double)]
  val minute = (new java.text.SimpleDateFormat("mm")).format(new java.util.Date())
  val second = (new java.text.SimpleDateFormat("ss")).format(new java.util.Date())
  data.write.mode(SaveMode.Overwrite).csv("/datasets/streamingFiles/" + minute +"_" + second + ".csv")
  Thread.sleep(5000L) // sleep 5 seconds
}
import scala.util.Random
import scala.util.Random._
myMixtureOf2Normals: (normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: scala.util.Random)(String, Double)
r: scala.util.Random = scala.util.Random@11234c7
a: Int = 0

ScaDaMaLe Course site and book

Sketch Origins

READ: Philippe Flajolet and Nigel Martin (1985), Probabilistic Counting Algorithms for Data Base Applications, http://db.cs.berkeley.edu/cs286/papers/flajoletmartin-jcss1985.pdf.

Apache Sketches

Some general recent talks/blogs on various sketches:

  • https://databricks.com/session_na20/high-performance-analytics-with-probabilistic-data-structures-the-power-of-hyperloglog
  • https://databricks.com/blog/2016/05/19/approximate-algorithms-in-apache-spark-hyperloglog-and-quantiles.html
    • the above has a databricks notebook you can try to self-study

We will next focus on a specific sketch called T-Digest for approximating extreme quantiles: - https://databricks.com/session/one-pass-data-science-in-apache-spark-with-generative-t-digests - https://databricks.com/session/sketching-data-with-t-digest-in-apache-spark

ScaDaMaLe Course site and book

Sketching with T-digest for quantiles

A Toy Anomaly Detector

Fisher noticed the fundamental computational difference between statistics like the mean and covariance on one hand, and the median and other quantiles on the other, in the early 1900s.

The former are today called recursively computable statistics: they can be updated with each new observation using only a constant amount of memory. When you consider the memory footprint needed to keep statistics such as quantiles updated, you enter the world of probabilistic data structures...
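For instance, here is a minimal sketch (an illustration added here, not from the original notebook) of why the mean is recursively computable: a constant-size state, the count and the current mean, is enough to fold in one new observation at a time, whereas an exact median in general needs to keep (essentially) all the observations around.

// a minimal sketch: the mean as a recursively computable statistic
case class RunningMean(count: Long, mean: Double) {
  // fold in one new observation using O(1) memory
  def update(x: Double): RunningMean =
    RunningMean(count + 1, mean + (x - mean) / (count + 1))
}

val xs = Seq(1.0, 2.0, 4.0, 8.0)
val rm = xs.foldLeft(RunningMean(0L, 0.0))(_ update _) // RunningMean(4, 3.75)

// by contrast, an exact median (here the upper median) needs all the observations in memory
val exactMedian = xs.sorted.apply(xs.length / 2) // 4.0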

The basic idea of sketching is formally conveyed in Chapter 6 of Foundations of Data Science.

Let's get a more informal view from the following sources.

Here we focus on a specific sketch called T-Digest for approximating extreme quantiles:

Pointers:

NOTE:

  • You could once watch Ted Dunning's explanation of t-digest here:
    • https://www.youtube.com/watch?v=B0dMc0t7K1g
    • But, unfortunately, since 2020 this video has been made private, with the warning: "Private video. Sign in if you've been granted access to this video."

Let us import the following Scala implementation of t-digest:

  • for Spark 3.0.1 use maven coordinates: org.isarnproject:isarn-sketches-spark_2.12:0.5.0-sp3.0

See the library: https://github.com/isarn/isarn-sketches-spark
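If you are building a standalone Spark project instead of attaching the Maven library to a Databricks cluster, the corresponding sbt dependency would look roughly as follows (a sketch; make sure the version matches the Spark and Scala versions you actually run):

// in build.sbt (sketch): the sbt equivalent of the Maven coordinates above
libraryDependencies += "org.isarnproject" %% "isarn-sketches-spark" % "0.5.0-sp3.0"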

import org.isarnproject.sketches.java.TDigest
import org.isarnproject.sketches.spark.tdigest._
import scala.util.Random
import scala.util.Random._
import org.isarnproject.sketches.java.TDigest
import org.isarnproject.sketches.spark.tdigest._
import scala.util.Random
import scala.util.Random._
// make a sample to produce a mixture of two normal RVs with standard deviation 1 but with different location or mean parameters
def myMixtureOf2Normals( normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: Random) : Double = {
  val sample = if (r.nextDouble <= normalWeight) {r.nextGaussian+normalLocation } 
               else {r.nextGaussian + abnormalLocation} 
  return sample
   }
myMixtureOf2Normals: (normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: scala.util.Random)Double

Here is a quick overview of the simple mixture of two Normal or Gaussian random variables we will be simulating from.

val r = new Random(1L)
println(myMixtureOf2Normals(1.0, 10.0, 0.99, r), myMixtureOf2Normals(1.0, 10.0, 0.99, r))
// should always produce (0.5876430182311466,-0.34037937678788865) when seed = 1L
(0.5876430182311466,-0.34037937678788865)
r: scala.util.Random = scala.util.Random@2ac6c652
val r = new Random(12345L)
val data = sc.parallelize(Vector.fill(10000){myMixtureOf2Normals(1.0, 10.0, 0.99, r)}).toDF.as[Double]
r: scala.util.Random = scala.util.Random@568f8382
data: org.apache.spark.sql.Dataset[Double] = [value: double]
data.show(5)
+--------------------+
|               value|
+--------------------+
|  0.2576188264990721|
|-0.13149698512045327|
|  1.4139063973267458|
|-0.02383387596851...|
|  0.7274784426774964|
+--------------------+
only showing top 5 rows
display(data)
value
0.2576188264990721
-0.13149698512045327
1.4139063973267458
...
(truncated display output: a long table of the sampled values, mostly near 1.0 with occasional values near 10.0 from the rare anomalous component)

Let's t-digest this data using a user-defined function udf evaluated below.

val udf = TDigestAggregator.udf[Double](compression = 0.2, maxDiscrete = 25)
udf: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedAggregator(org.isarnproject.sketches.spark.tdigest.TDigestAggregator@4ff546b7,class[value[0]: double],None,true,true)

We can agg or aggregate the value column of Doubles in the data DataFrame as follows.

val agg = data.agg(udf($"value"))
agg: org.apache.spark.sql.DataFrame = [tdigestaggregator(value): tdigest]

Next, let's get the t-digest of the aggregation as td.

val td = agg.first.getAs[TDigest](0) // t-digest
td: org.isarnproject.sketches.java.TDigest = TDigest(-2.795387521721169 -> (1.0, 1.0), -2.5827462010549587 -> (1.0, 2.0), -2.5483614528075127 -> (1.0, 3.0), -2.477169648218326 -> (1.0, 4.0), -2.3989148382735106 -> (1.0, 5.0), -2.3621428788859387 -> (1.0, 6.0), -2.3148374687684097 -> (1.0, 7.0), -2.3118179023740413 -> (1.0, 8.0), -2.287613401306727 -> (1.6985517872440687, 9.69855178724407), -2.2636971919621813 -> (0.30144821275593126, 10.0), -2.0993694077900718 -> (1.0, 11.0), -2.0241191135970116 -> (1.698067286443337, 12.698067286443337), -2.003553685010365 -> (0.5043915481076007, 13.202458834550937), -1.9902316558965267 -> (2.7624647600316936, 15.96492359458263), -1.9799895567671855 -> (0.03507640541736867, 16.0), -1.9544570681791966 -> (1.2983022641753663, 17.298302264175366), -1.949427679786941 -> (0.7016977358246337, 18.0), -1.907719117411431 -> (1.697097354468562, 19.69709735446856), -1.8732175304590568 -> (2.302902645531438, 22.0), -1.7834073726649238 -> (4.0, 26.0), -1.6919174552402663 -> (1.0975606232594757, 27.097560623259476), -1.6703272354126013 -> (2.6632081832738317, 29.760768806533306), -1.6589610792385319 -> (2.643090879929395, 32.4038596864627), -1.623299585167854 -> (3.5068289300613555, 35.910688616524055), -1.6069889994292688 -> (3.028777317662718, 38.93946593418678), -1.5892489007898511 -> (5.4762426700006355, 44.415708604187415) ...)

We can evaluate the t-digest td as a cumulative distribution function or CDF at x via the .cdf(x) method.

td.cdf(1.0)
res28: Double = 0.4995238927606277

We can also get the inverse CDF at any u in the unit interval to get quantiles as follows.

val cutOff = td.cdfInverse(0.99)
cutOff: Double = 8.75686052913737

Let's flag those points that cross the threshold determined by the cutOff.

val dataFlagged = data.withColumn("anomalous",$"value">cutOff)
dataFlagged: org.apache.spark.sql.DataFrame = [value: double, anomalous: boolean]

Let's show and display the anomalous points.

We are not interested in word-wars over the terms anomalies and outliers here. At the end of the day we are really only interested in the real problems that these arithmetic and syntactic expressions will be used to solve, such as:

  • keeping a washing machine running longer by shutting it down before it breaks down (predictive maintenance)
  • keeping a network from being attacked by bots/malware/etc. by flagging any unusual events worth escalating to the network security ops teams (without annoying them constantly!)
  • etc.
data.withColumn("anomalous",$"value">cutOff).filter("anomalous").show(5)
+------------------+---------+
|             value|anomalous|
+------------------+---------+
| 9.639219241219372|     true|
|11.539205812425335|     true|
| 9.423175513609095|     true|
|  8.99959554980265|     true|
|10.174199861232976|     true|
+------------------+---------+
only showing top 5 rows
display(dataFlagged)

Apply the batch-learnt T-Digest on a new stream of data

First let's simulate historical data for batch-processing.

import scala.util.Random
import scala.util.Random._

// simulate 5 bursts of historical data - emulate batch processing

// make a sample to produce a mixture of two normal RVs with standard deviation 1 but with different location or mean parameters
def myMixtureOf2Normals( normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: Random) : (String, Double) = {
  val sample = if (r.nextDouble <= normalWeight) {r.nextGaussian+normalLocation } 
               else {r.nextGaussian + abnormalLocation} 
  Thread.sleep(5L) // sleep 5 milliseconds
  val now = (new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")).format(new java.util.Date())
  return (now,sample)
   }
   
 dbutils.fs.rm("/datasets/batchFiles/",true) // this is to delete the directory before staring a job
 
val r = new Random(123454321L)
var a = 0;
// for loop execution to write files to distributed fs
for( a <- 1 to 5){
  val data = sc.parallelize(Vector.fill(100){myMixtureOf2Normals(1.0, 10.0, 0.99, r)}).coalesce(1).toDF.as[(String,Double)]
  val minute = (new java.text.SimpleDateFormat("mm")).format(new java.util.Date())
  val second = (new java.text.SimpleDateFormat("ss")).format(new java.util.Date())
  data.write.mode(SaveMode.Overwrite).csv("/datasets/batchFiles/" + minute +"_" + second + ".csv")
  Thread.sleep(10L) // sleep 10 milliseconds
}
import scala.util.Random
import scala.util.Random._
myMixtureOf2Normals: (normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: scala.util.Random)(String, Double)
r: scala.util.Random = scala.util.Random@5bdb120b
a: Int = 0
display(dbutils.fs.ls("/datasets/batchFiles/"))
path name size
dbfs:/datasets/batchFiles/09_29.csv/ 09_29.csv/ 0.0
dbfs:/datasets/batchFiles/09_31.csv/ 09_31.csv/ 0.0
dbfs:/datasets/batchFiles/09_32.csv/ 09_32.csv/ 0.0
dbfs:/datasets/batchFiles/09_33.csv/ 09_33.csv/ 0.0
dbfs:/datasets/batchFiles/09_35.csv/ 09_35.csv/ 0.0

Now let's use a static DataFrame to process these files with t-digest and get the 0.99-th quantile based Cut-off.

// Read all the csv files written atomically in a directory
import org.apache.spark.sql.types._

val timedScore = new StructType().add("time", "timestamp").add("score", "Double")

import java.sql.{Date, Timestamp}
case class timedScoreCC(time: Timestamp, score: Double)

//val streamingLines = sc.textFile("/datasets/streamingFiles/*").toDF.as[String]
val staticLinesDS = spark
  .read
  .option("sep", ",")
  .schema(timedScore)      // Specify schema of the csv files
  .csv("/datasets/batchFiles/*").as[timedScoreCC]


val udaf = TDigestAggregator.udf[Double](compression = 0.2, maxDiscrete = 25)


val batchLearntCutOff99 = staticLinesDS
                  .agg(udaf($"score").as("td"))
                  .first.getAs[TDigest](0)
                  .cdfInverse(0.99)
import org.apache.spark.sql.types._
timedScore: org.apache.spark.sql.types.StructType = StructType(StructField(time,TimestampType,true), StructField(score,DoubleType,true))
import java.sql.{Date, Timestamp}
defined class timedScoreCC
staticLinesDS: org.apache.spark.sql.Dataset[timedScoreCC] = [time: timestamp, score: double]
udaf: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedAggregator(org.isarnproject.sketches.spark.tdigest.TDigestAggregator@53b818de,class[value[0]: double],None,true,true)
batchLearntCutOff99: Double = 8.65009597030763

We will next execute the companion notebook 040a_TDigestInputStream in order to generate the files with the Gaussian mixture for streaming jobs.

The code in the companion notebook is as follows for convenience (you could just copy-paste this code into another notebook in the same cluster with the same distributed file system):

import scala.util.Random
import scala.util.Random._

// make a sample to produce a mixture of two normal RVs with standard deviation 1 but with different location or mean parameters
def myMixtureOf2Normals( normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: Random) : (String, Double) = {
  val sample = if (r.nextDouble <= normalWeight) {r.nextGaussian+normalLocation } 
               else {r.nextGaussian + abnormalLocation} 
  Thread.sleep(5L) // sleep 5 milliseconds
  val now = (new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")).format(new java.util.Date())
  return (now,sample)
   }
   
 dbutils.fs.rm("/datasets/streamingFiles/",true) // this is to delete the directory before staring a job
 
val r = new Random(12345L)
var a = 0;
// for loop execution to write files to distributed fs
for( a <- 1 to 20){
  val data = sc.parallelize(Vector.fill(100){myMixtureOf2Normals(1.0, 10.0, 0.99, r)}).coalesce(1).toDF.as[(String,Double)]
  val minute = (new java.text.SimpleDateFormat("mm")).format(new java.util.Date())
  val second = (new java.text.SimpleDateFormat("ss")).format(new java.util.Date())
  data.write.mode(SaveMode.Overwrite).csv("/datasets/streamingFiles/" + minute +"_" + second + ".csv")
  Thread.sleep(5000L) // sleep 5 seconds
}

We will simply apply the batch-learnt t-digest as the threshold for determining if the streaming data is anomalous or not.

import org.apache.spark.sql.types._
import java.sql.{Date, Timestamp}

val timedScore = new StructType().add("time", "timestamp").add("score", "Double")
case class timedScoreCC(time: Timestamp, score: Double)

val streamingLinesDS = spark
  .readStream
  .option("sep", ",")
  .schema(timedScore)      // Specify schema of the csv files
  .csv("/datasets/streamingFiles/*").as[timedScoreCC]
import org.apache.spark.sql.types._
import java.sql.{Date, Timestamp}
timedScore: org.apache.spark.sql.types.StructType = StructType(StructField(time,TimestampType,true), StructField(score,DoubleType,true))
defined class timedScoreCC
streamingLinesDS: org.apache.spark.sql.Dataset[timedScoreCC] = [time: timestamp, score: double]
//display(streamingLinesDS)

Now, we can apply this batch-learnt cut-off from the static DataSet to the streaming DataSet.

This is a simple example of learning in batch mode (say overnight or every few hours) and applying it to live streaming data.

// Start running the query that prints the flagged anomalies to the console
val dataFlagged = streamingLinesDS
      .withColumn("anomalous",$"score" > batchLearntCutOff99).filter($"anomalous")
      .writeStream
      //.outputMode("complete")
      .format("console")
      .start()

dataFlagged.awaitTermination() // hit cancel to terminate
-------------------------------------------
Batch: 0
-------------------------------------------
+--------------------+------------------+---------+
|                time|             score|anomalous|
+--------------------+------------------+---------+
|2020-11-16 13:12:...| 9.858438409632281|     true|
|2020-11-16 13:12:...| 10.45683581285141|     true|
|2020-11-16 13:11:...| 9.423175513609095|     true|
|2020-11-16 13:11:...|10.442627838980057|     true|
|2020-11-16 13:11:...|10.460772141286911|     true|
|2020-11-16 13:11:...|11.260505056159252|     true|
|2020-11-16 13:12:...| 9.454926349089147|     true|
|2020-11-16 13:12:...| 10.02254460606071|     true|
|2020-11-16 13:12:...| 9.311690918035534|     true|
|2020-11-16 13:12:...| 9.695132992174205|     true|
|2020-11-16 13:12:...|10.439052640762693|     true|
|2020-11-16 13:11:...|11.539205812425335|     true|
|2020-11-16 13:12:...| 9.102639076417908|     true|
|2020-11-16 13:11:...| 9.905282503779972|     true|
|2020-11-16 13:11:...| 9.639219241219372|     true|
|2020-11-16 13:11:...|  8.99959554980265|     true|
|2020-11-16 13:11:...|10.174199861232976|     true|
|2020-11-16 13:13:...| 9.311726779124077|     true|
|2020-11-16 13:13:...| 8.994959541314255|     true|
|2020-11-16 13:12:...|  9.87803253322451|     true|
+--------------------+------------------+---------+

Although the above pattern of periodically estimating the 99% Cut-Off by batch-processing static DataSets of historical data, and then applying that Cut-Off to filter anomalous data points that are currently streaming at us, is good enough for several applications, we may want to do online estimation/learning of the Cut-Off based on the 99th percentile of all the data up to the present time, and use this live Cut-Off to decide which points are anomalous now.

For this we need to use more delicate parts of Structured Streaming.

Streaming T-Digest - Online Updating of the Cut-Off

To implement a streaming t-digest of the data that keeps the current threshold and a current t-digest, we need to get into more delicate parts of structured streaming and implement our own flatMapGroupsWithState.

Here are some starting points for diving deeper in this direction of arbitrary stateful processing:

Streaming Machine Learning and Structured Streaming

Ultimately we want to use structured streaming for online machine learning algorithms and not just sketching.

Data Engineering Science Pointers

Using kafka, Cassandra and Spark Structured Streaming

ScaDaMaLe Course site and book

TODO:

  • Make this notebook work for the latest version of the isarn sketches library with lots of optimisations.
  • Start by learning from resources embedded in earlier notebooks on T-Digest...
  • This notebook should still work with Spark 2.4+ and the isarn-sketches library from 2018.
  • Estimated time to upgrade to latest version is 1-x hours (actual time is: ? hours).

Streaming TDigest with flatMapGroupsWithState

by Benny Avelin and Håkan Persson

The idea with this sketch is to demonstrate how we can have a running t-Digest in a streaming context.

Arbitrary stateful aggregations in streaming

We have two stateful operations: mapGroupsWithState and flatMapGroupsWithState. The Databricks blog has a relatively good explanation of the two operations in the blog post https://databricks.com/blog/2017/10/17/arbitrary-stateful-processing-in-apache-sparks-structured-streaming.html. However, the concept is maybe not so easy to understand, so I will try to give a simple explanation of what is going on with these two aggregations.

Structured streaming

For the purpose of this sketch we only need to know that new data will arrive as a batch; if we apply the aggregations to an ordinary DataFrame instead of a streaming one, then the entirety of the data arrives in a single batch.

A running state

The way both mapGroupsWithState and flatMapGroupsWithState work is that we start with a key-value grouped dataset; when new data arrives it is split into the groups prescribed by the key, and each key gets a batch of data. The important idea to realize is that for each key we have a running state, and there is no restriction on which keys are allowed, so the number of keys can grow, shrink, or change arbitrarily. If a new key appears, the first step in both mapGroupsWithState and flatMapGroupsWithState is to initialize a zero state before processing the first batch for this key; the next time that key appears, the previous state is remembered and can be combined with the new batch of data to compute the next state. What can a state be? An object of some pre-specified class: the simplest would be a running max/min/mean but, as we will see in this sketch, it can also be a t-digest.
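As a concrete but hypothetical illustration (not part of this notebook's pipeline), a running max per key with mapGroupsWithState could look like the sketch below; kvDS is an assumed keyed dataset of (String, Double) pairs, the zero state is created the first time a key is seen, and the state is read back and updated on every later batch for that key.

import org.apache.spark.sql.streaming.GroupState

// hypothetical running-state sketch: keep a running max per key across batches
def runningMax(key: String, values: Iterator[Double], state: GroupState[Double]): (String, Double) = {
  val zero   = state.getOption.getOrElse(Double.MinValue) // zero state for a brand-new key
  val newMax = values.foldLeft(zero)(math.max)            // fold this batch into the state
  state.update(newMax)                                    // remember it for the next batch of this key
  (key, newMax)
}

// usage sketch on an assumed keyed dataset kvDS: Dataset[(String, Double)]
// kvDS.groupByKey(_._1).mapValues(_._2).mapGroupsWithState(runningMax _)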

flatmapGroupsWithState vs mapGroupsWithState

The basic difference between the two can be inferred from their names, but let us go into detail. If we are only interested in a single aggregated "value" (which could be a case class) from each key we should use mapGroupsWithState; however, there are some interesting caveats with using mapGroupsWithState: certain output modes are not allowed, and further aggregations are not allowed. flatMapGroupsWithState, on the other hand, can output any number of rows per key, allows more output modes, and allows further aggregations; see the Structured Streaming programming guide.

Query type               Output mode   Operations allowed
mapGroupsWithState       Update        Aggregations not allowed
flatMapGroupsWithState   Append        Aggregations allowed after
flatMapGroupsWithState   Update        Aggregations not allowed

Some streaming input

We need a streaming source for our example; this can be done in a number of ways. There is probably some nicer way to do this simply, but the method I know for generating test samples is to run a loop that writes files with data, so that each time a new file arrives Spark considers it an update and loads it as a batch. We have provided some code to generate points sampled from a normal distribution, with anomalies added from another normal distribution.

import scala.util.Random
import scala.util.Random._
import scala.util.{Success, Failure}

// make a sample to produce a mixture of two normal RVs with standard deviation 1 but with different location or mean parameters
def myMixtureOf2NormalsReg( normalLocation: Double, abnormalLocation: Double, normalWeight: Double, r: Random) : (String, Double) = {
  val sample = if (r.nextDouble <= normalWeight) {r.nextGaussian+normalLocation } 
               else {r.nextGaussian + abnormalLocation} 
  Thread.sleep(5L) // sleep 5 milliseconds
  val now = (new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")).format(new java.util.Date())
  return (now,sample)
}

The /tmp folder

Databricks Community Edition has a file-number limit of 10000, and after running Databricks for a while one starts to notice that things fail; skimming the stack trace of the failure, we realize that we have reached said limit. Deleting files that one has created does not seem to solve the issue; this is because the /tmp folder counts towards the limit and is not cleared nearly as often as would be good for our work. Therefore we just clear it before starting our job...

P.S. If you have not cleared the /tmp folder before, this might actually take some time.

dbutils.fs.rm("/datasets/streamingFiles/",true) 
//dbutils.fs.rm("/tmp",true) // this is to delete the directory before staring a job
val r = new Random(12345L)
var a = 0;
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
// for loop execution to write files to distributed fs
// We have made a Future out of this, which means that it runs concurrently with what we do next, i.e. essentially it is a separate thread.

val writeStreamFuture = Future {
  for( a <- 1 to 10){
    val data = sc.parallelize(Vector.fill(1000){myMixtureOf2NormalsReg(1.0, 10.0, 0.99, r)}).coalesce(1).toDF.as[(String,Double)]
    val minute = (new java.text.SimpleDateFormat("mm")).format(new java.util.Date())
    val second = (new java.text.SimpleDateFormat("ss")).format(new java.util.Date())
    data.write.mode(SaveMode.Overwrite).csv("/datasets/streamingFiles/" + minute +"_" + second + ".csv")
    Thread.sleep(50000L) // sleep 50 seconds
  }
}
r: scala.util.Random = scala.util.Random@27d25df7
a: Int = 0
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
writeStreamFuture: scala.concurrent.Future[Unit] = List()
display(dbutils.fs.ls("/datasets/streamingFiles"))
path name size
dbfs:/datasets/streamingFiles/18_44.csv/ 18_44.csv/ 0.0

AWS eventually consistent

The AWS distributed filesystem is eventually consistent; this can mean, for instance, that a file that was just created cannot yet be read, and if we are unlucky the following code will fail to run.
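As a small defensive measure, one can poll the directory until at least one file is visible before wiring up the reader. The following is just a sketch added here (it is not in the original notebook), reusing the dbutils.fs.ls utility used elsewhere in this notebook:

// a minimal defensive sketch: wait until the streaming directory exists and is non-empty,
// retrying a few times with a short sleep in between, before creating the reader
def waitForFiles(path: String, maxRetries: Int = 30): Boolean = {
  var tries = 0
  var found = false
  while (!found && tries < maxRetries) {
    found = scala.util.Try(dbutils.fs.ls(path).nonEmpty).getOrElse(false)
    if (!found) { Thread.sleep(2000L); tries += 1 }
  }
  found
}
waitForFiles("/datasets/streamingFiles/")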

import org.apache.spark.sql.types._
import java.sql.{Date, Timestamp}

/**
  * timedScore is the SQL schema for timedScoreCC, and the files written in the above code
  */
val timedScore = new StructType().add("time", "timestamp").add("score", "Double")
case class timedScoreCC(time: Timestamp, score: Double)

val streamingLinesDS = spark
  .readStream
  .option("sep", ",")
  .schema(timedScore)      // Specify schema of the csv files
  .option("MaxFilesPerTrigger", 1) //  maximum number of new files to be considered in every trigger (default: no max) 
  .csv("/datasets/streamingFiles/*").as[timedScoreCC]
import org.apache.spark.sql.types._
import java.sql.{Date, Timestamp}
timedScore: org.apache.spark.sql.types.StructType = StructType(StructField(time,TimestampType,true), StructField(score,DoubleType,true))
defined class timedScoreCC
streamingLinesDS: org.apache.spark.sql.Dataset[timedScoreCC] = [time: timestamp, score: double]

States and rows

To begin describing the code below, let us first look at what will be our running state. The isarnproject sketches library packs the TDigest class into a TDigestSQL case class and provides encoders so that it can be used in a DataFrame; we can therefore capitalize on this and use TDigestSQL as our running state (to be precise, it is the TDigest wrapped by TDigestSQL that is the state). The next thing to consider is how and what we should output. This example shows how to embed, in a single row, the TDigest, the threshold value that comes from cdfInverse(0.99), and the actual data that is above the threshold. To do this we create a case class which will be the template for our row; in the code below it is called TdigAndAnomaly.

updateAcrossBatch

This is our main update function that we send as a parameter to flatMapGroupsWithState.

  • It takes as its first input the key value, which we do not care about in this example; it is just a dummy for us.
  • The second input is inputs: Iterator[timedScoreCC], which is an iterator over the batch of data that we have received. This is the type-safe version, i.e. we know that we have a Dataset[timedScoreCC]; if we don't, and instead have a DataFrame = Dataset[Row], we have to use inputs: Iterator[Row] and extract the columns of interest, cast into the appropriate types.
  • The third input is the running state variable; this is always wrapped in a GroupState wrapper class, i.e. since TDigestSQL is our state we need GroupState[TDigestSQL] as oldState.
  • Lastly we have the output, which is an iterator of the case class chosen as the output row; in our case this is Iterator[TdigAndAnomaly].

Each time a batch gets processed, the batch data is in the inputs variable. We first make sure that the state is either the previous state (if it exists) or a freshly initialized zero state. Then we simply process the batch one data point at a time, each time calling updateTDIG, which updates the state with the new data point (adding the point to the t-digest). Once we have added all the points to the t-digest, we can compute the updated threshold using cdfInverse(0.99); after that we simply filter the batch to obtain an iterator of the anomalies.

GroupStateTimeout

This is an interesting parameter that you really should look into if you wish to understand structured streaming; essentially it is the whole point of messing around with the structured streaming framework. See the programming guide.
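As a hypothetical sketch (the code below sticks to GroupStateTimeout.NoTimeout), a processing-time timeout could be wired in as follows: the update function checks state.hasTimedOut for keys that have gone silent and renews the timeout whenever fresh data arrives for a key.

import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

// hypothetical per-key running count that expires idle keys (illustration only)
def countWithTimeout(key: Int, inputs: Iterator[timedScoreCC], state: GroupState[Long]): Iterator[(Int, Long)] = {
  if (state.hasTimedOut) {                   // no new data arrived for this key within the timeout
    val last = state.getOption.getOrElse(0L)
    state.remove()                           // drop the stale per-key state
    Iterator((key, last))
  } else {
    val n = state.getOption.getOrElse(0L) + inputs.size
    state.update(n)
    state.setTimeoutDuration("10 minutes")   // renew the processing-time timeout
    Iterator((key, n))
  }
}

// used as: keyed.flatMapGroupsWithState(OutputMode.Append, GroupStateTimeout.ProcessingTimeTimeout)(countWithTimeout _)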

import org.isarnproject.sketches._
import org.isarnproject.sketches.udaf._
import org.apache.spark.isarnproject.sketches.udt._
import org.isarnproject.sketches._
import org.isarnproject.sketches.udaf._
import org.apache.spark.isarnproject.sketches.udt._

case class TdigAndAnomaly(tDigSql:TDigestSQL, tDigThreshold:Double, time:Timestamp, score:Double)
//State definition

def updateTDIG(state:TDigestSQL, input:timedScoreCC):TDigestSQL = {
  //For each input let us update the TDigest
  TDigestSQL(state.tdigest + input.score)
}

import org.apache.spark.sql.streaming.{GroupStateTimeout, OutputMode, GroupState}
// Update function, takes a key, an iterator of events and a previous state, returns an iterator which represents the
// rows of the output from flatMapGroupsWithState
def updateAcrossBatch(dummy:Int, inputs: Iterator[timedScoreCC], oldState: GroupState[TDigestSQL]):Iterator[TdigAndAnomaly] = {
	// state is the oldState if it exists otherwise we create an empty state to start from
  var state:TDigestSQL = if (oldState.exists) oldState.get else TDigestSQL(TDigest.empty())
  // We duplicate the iterator inputs into inputs1 and inputs2; since an Iterator is TraversableOnce, we must not use inputs itself afterwards
  val (inputs1,inputs2) = inputs.duplicate
  // Loop to update the state, i.e. the tDigest
  for (input <- inputs1) {
    state = updateTDIG(state, input)
    oldState.update(state)
  }
  //Precompute the threshold for which we will sort the anomalies
  val cdfInv:Double = state.tdigest.cdfInverse(0.99)
  // Yields an iterator of anomalies
  val anomalies:Iterator[TdigAndAnomaly] = for(input <- inputs2; if (input.score > cdfInv)) yield TdigAndAnomaly(state,cdfInv,input.time,input.score)
  //Return the anomalies iterator, each item in the iterator gives a row in the output
  anomalies
}

import org.apache.spark.sql.streaming.GroupStateTimeout

val query = streamingLinesDS
  .groupByKey(x => 1)
  .flatMapGroupsWithState(OutputMode.Append,GroupStateTimeout.NoTimeout)(updateAcrossBatch)
  .writeStream
  .outputMode("append")
  .format("console")
  .start()
query.awaitTermination()
-------------------------------------------
Batch: 0
-------------------------------------------
+--------------------+------------------+--------------------+------------------+
|             tDigSql|     tDigThreshold|                time|             score|
+--------------------+------------------+--------------------+------------------+
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...| 9.639219241219372|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...|11.539205812425335|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...| 9.423175513609095|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...|  8.99959554980265|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...|10.174199861232976|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...|10.442627838980057|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...|10.460772141286911|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...|11.260505056159252|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...| 9.905282503779972|
|TDigestSQL(TDiges...|7.9098819334928265|2018-01-30 07:18:...| 9.102639076417908|
+--------------------+------------------+--------------------+------------------+

-------------------------------------------
Batch: 1
-------------------------------------------
+--------------------+-----------------+--------------------+------------------+
|             tDigSql|    tDigThreshold|                time|             score|
+--------------------+-----------------+--------------------+------------------+
|TDigestSQL(TDiges...|9.553157173102415|2018-01-30 07:19:...| 9.695132992174205|
|TDigestSQL(TDiges...|9.553157173102415|2018-01-30 07:19:...|10.439052640762693|
|TDigestSQL(TDiges...|9.553157173102415|2018-01-30 07:19:...| 10.02254460606071|
|TDigestSQL(TDiges...|9.553157173102415|2018-01-30 07:19:...|  9.87803253322451|
|TDigestSQL(TDiges...|9.553157173102415|2018-01-30 07:19:...| 9.858438409632281|
|TDigestSQL(TDiges...|9.553157173102415|2018-01-30 07:19:...| 10.45683581285141|
+--------------------+-----------------+--------------------+------------------+

-------------------------------------------
Batch: 2
-------------------------------------------
+--------------------+-----------------+--------------------+------------------+
|             tDigSql|    tDigThreshold|                time|             score|
+--------------------+-----------------+--------------------+------------------+
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...| 10.13608393266294|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...| 9.562663532092044|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...| 10.50152359072326|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|10.061968291873699|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|10.242131495863143|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...| 9.535096094790836|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|11.012797937983356|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...| 9.841120163403126|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|11.569770306228012|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|10.947191786184677|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|10.380284632322022|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...|10.399812080160988|
|TDigestSQL(TDiges...|9.185194249546159|2018-01-30 07:20:...| 10.47155413079559|
+--------------------+-----------------+--------------------+------------------+

-------------------------------------------
Batch: 3
-------------------------------------------
+--------------------+-----------------+--------------------+------------------+
|             tDigSql|    tDigThreshold|                time|             score|
+--------------------+-----------------+--------------------+------------------+
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...|11.028282567178604|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...| 9.801446956198197|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...| 9.349642991847796|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...|10.446018187089411|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...|10.735315117514041|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...|11.160788156092288|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...| 9.741913362611065|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...|10.031203472330613|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...| 9.310488974576659|
|TDigestSQL(TDiges...|9.111097583328926|2018-01-30 07:21:...|10.669624608178813|
+--------------------+-----------------+--------------------+------------------+

-------------------------------------------
Batch: 4
-------------------------------------------

Have fun

Arbitrary stateful aggregations are very powerful and you can really do a lot, especially if you are allowed to perform aggregations afterwards (flatMapGroupsWithState with Append mode). This is some really cool stuff!