Research Skills – Big Data
Lecture 1 – Parallel and distributed computing
What is big data? There is no fixed definition; the concept changes over time. But data becomes big
data when it becomes too large or too complex to be analysed with traditional data analysis software.
This becomes noticeable when analysis becomes too slow or too unreliable, when systems become
unresponsive, or when day-to-day operations are impacted.
Changes over time:
- In the past, storage was expensive. Now, storage is relatively cheap and easily available
- In the past, only the most crucial data was preserved. Now, companies and governments
preserve huge amounts of data, and more data is generated (e.g., customer information,
historical purchases, GPS trajectories)
- In the past, companies only consulted historical data, and did not analyse it. Now, more
companies and governments rely on data analysis for tasks such as event prediction and fraud detection.
Data analysis is computationally intensive and expensive. Examples:
- Online recommender systems: require instant results
- Frequent pattern mining: time complexity exponential in the number of different items,
independent of the number of transactions (e.g., market basket analysis; see the illustration after this list)
- Multi-label classification: exponential number of possible combinations of labels to be
assigned to a new sample (e.g., Wikipedia tagging)
- Subspace clustering: exponential number of possible sets of dimensions in which clusters
could be found (e.g., customer segmentation)
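To make the exponential blow-up above concrete, a small counting illustration (plain arithmetic only, not a real mining or classification algorithm): with n distinct items (or labels, or dimensions) there are 2^n - 1 non-empty subsets that could in principle have to be considered.

    # Number of candidate non-empty subsets of n items/labels/dimensions: 2^n - 1
    for n in (10, 20, 50):
        print(f"{n} items -> {2 ** n - 1:,} candidate subsets")

Already at 50 items this gives more than 10^15 candidate subsets, regardless of how many transactions are in the data.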
The three aspects of big data:
- Volume – the actual quantity of data that is gathered
o Number of events logged, number of transactions, number of attributes, descriptions
- Variety – the different types of data that are gathered
o Some attributes may be numeric, others textual; structured vs. unstructured; irregular
timing (sensor data arrive at regular time intervals, the accompanying log data do not)
- Velocity – the speed at which new data is coming in and the speed at which data must be
handled
o May result in irrecoverable bottlenecks
Solutions for big data: invest in hardware and use intelligent algorithms.
The goal of parallel computing: leveraging the full potential of your multicore, multiprocessor, or
multicomputer system. The goal of parallel processing is to reduce computation time.
Embarrassingly parallel – when an algorithm can be split into smaller parts that run in parallel. Each
chunk runs simultaneously, speeding up the process (linear speedup). When the parallel processing is
done, the chunks' results are re-joined. Data processing is usually embarrassingly parallel, assuming
no communication is necessary between the workers. Example:
- Define a function that takes as input the html source file of a web page and returns the main
article’s text
o Instead of applying the function sequentially to each web page, you can define several
workers that each apply the function to a large number of web pages simultaneously
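A minimal sketch of this worker setup in Python (the extractor below is a naive placeholder, not a real article extractor; the pool size and the page list are illustrative assumptions):

    import re
    from multiprocessing import Pool

    def extract_article_text(html_source: str) -> str:
        # Placeholder "main article text" extractor: naively strips all tags.
        return re.sub(r"<[^>]+>", " ", html_source).strip()

    if __name__ == "__main__":
        # Illustrative input; in practice these would be the downloaded HTML sources.
        pages = [f"<html><body><p>Article {i}</p></body></html>" for i in range(10_000)]
        with Pool(processes=4) as workers:
            # Each worker applies the same function to its own share of the pages;
            # no communication between workers is needed (embarrassingly parallel).
            texts = workers.map(extract_article_text, pages)

The re-joining step is simply the list returned by map, which keeps the results in the original order of the pages.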
Linear speedup – executing two tasks in parallel on two cores should halve the running time.
Speedup is the ratio between serial and parallel execution time.
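For example (numbers purely for illustration): if the serial run takes 60 seconds and the run on 4 cores takes 20 seconds, the speedup is 60 / 20 = 3; the linear (optimal) speedup on 4 cores would have been 4.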
Task parallelism – multiple tasks are applied on the same data in parallel.
Data parallelism – a calculation is performed in parallel on many different data chunks.
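A minimal sketch of the difference, using two illustrative functions (mean and spread, not from any particular library):

    from concurrent.futures import ProcessPoolExecutor

    def mean(xs):
        return sum(xs) / len(xs)

    def spread(xs):
        return max(xs) - min(xs)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        with ProcessPoolExecutor(max_workers=4) as pool:
            # Data parallelism: the same calculation (mean) on different chunks of the data.
            chunks = [data[i::4] for i in range(4)]
            chunk_means = list(pool.map(mean, chunks))
            # Task parallelism: different tasks (mean and spread) on the same data.
            mean_future = pool.submit(mean, data)
            spread_future = pool.submit(spread, data)
            results = (mean_future.result(), spread_future.result())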
Analysis of parallel algorithms
- Parallelization can speed up the execution time of an algorithm, but it does not change its
complexity
- When analysing parallel algorithms one is mostly interested in the speedup that is gained
- Typically one has to take into account the overhead of the parallelization, but the optimal
speedup one can get is linear, i.e. if the work is divided over 4 processors, then execution
becomes at most 4 times faster.
- A lot of operations involving large amounts of data are embarrassingly parallel. However, in
general there are data or procedural dependencies that prevent a linear speedup.
A linear speedup is often not realized, because part of the algorithm typically cannot be parallelized.
Assume:
- T is the total serial execution time
- P is the proportion of the code that can be parallelized
- S is the number of parallel processes
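Putting these together gives Amdahl's law for the expected parallel execution time and speedup:

    parallel execution time = (1 - P) * T + (P * T) / S
    speedup = T / ((1 - P) * T + (P * T) / S) = 1 / ((1 - P) + P / S)

Even with arbitrarily many processes (S → ∞), the speedup is bounded by 1 / (1 - P); for example, if only 80% of the code can be parallelized, the speedup can never exceed 5.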
Data dependencies prevent you from realizing a linear speedup: the case where the input of one
segment of code depends on the output of another piece of code.
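A minimal sketch of such a dependency (illustrative values only): a running total in which every iteration needs the result of the previous one, so the iterations cannot simply be handed to independent workers.

    values = [3, 1, 4, 1, 5, 9, 2, 6]
    running_totals = []
    total = 0
    for v in values:
        total += v                    # input of this step is the output of the previous step
        running_totals.append(total)  # so the steps cannot be computed independently per chunk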
Branches prevent you from realizing a linear speedup: the case where the execution of a segment
of code depends on a logical condition.
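A minimal sketch of such a branch (illustrative functions only): which piece of work runs for a given element is only known once the condition has been evaluated, so the work cannot be divided evenly over the workers in advance.

    def cheap(x):
        return -x                              # stand-in for a cheap code segment

    def expensive(x):
        return sum(i * i for i in range(x))    # stand-in for an expensive code segment

    results = []
    for x in range(1000):
        if x % 7 == 0:                         # the logical condition decides which segment executes
            results.append(expensive(x))
        else:
            results.append(cheap(x))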