\subsection{The TPC-E Benchmark}

Since we are interested in enterprises that involve transaction processing, we used the TPC-E benchmark \cite{tpce}, an OLTP-oriented workload designed by the Transaction Processing Performance Council. Other OLTP benchmarks are TPC-C, which models the order-entry workload of a wholesale supplier, and TPC-W, which is now obsolete \cite{tpc-homepage}.

TPC-E uses a database to model a brokerage firm with customers who generate transactions related to trades, account inquiries, and market research. The brokerage firm in turn interacts with financial markets to execute orders on behalf of the customers and updates the relevant account information. The benchmark is ``scalable'', meaning that the number of customers defined for the brokerage firm can be varied to represent the workloads of different-sized businesses. The workload includes several OLTP queries of variable complexity, with different processing and memory demands, closely resembling the characteristics of real workloads. The benchmark also defines the mix of transactions that a compliant run must maintain.

The TPC-E scaling parameters were chosen as follows: 5000 customers, 1 working day of populated transactions, and a scale factor of 500.

\subsection{System Setup}

For the experiments, a group of machines was used to host the different entities of Hihooi. Each component (Manager, Listener, Primary DB, and extension DBs) ran on its own dedicated machine. All machines shared the same configuration (m4.large) and were deployed on AWS within a single local LAN.

Before starting any experiment, all databases were reset to an initial condition, ensuring that every experiment started from the same, constant state. During the experiments, all transactions involving a \textit{write} were executed within an explicit \emph{START TRANSACTION}/\emph{COMMIT} block (a short sketch of this discipline is given at the end of this section).

\subsection{Workload Mix}\label{wmix}

The TPC-E benchmark has a fixed mix of reads and writes that does not fit our demonstration goals. For this reason, we created four workload mixes with different read-write ratios: 0\%, 5\%, 10\%, and 30\% writes in the total workload (a minimal driver sketch appears at the end of this section).

\subsection{Part 1. Performance and Scalability}

The first part of the evaluation analyzes performance and scalability. Hihooi was compared against a reference system consisting of a single PostgreSQL instance. We measured the performance of Hihooi in configurations ranging from 1 to 8 extension DBs. Each setup was tested with the four workload mixes described in Section~\ref{wmix}.
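
To make the workload mixes of Section~\ref{wmix} concrete, the following minimal Python sketch draws each transaction as a read or a write with probability equal to the target write ratio. The per-transaction random draw and the transaction count are illustrative assumptions; the text above does not specify how the mix was enforced.

\begin{verbatim}
import random
from collections import Counter

def choose_txn(write_ratio):
    """Pick "write" with probability write_ratio, otherwise "read"."""
    return "write" if random.random() < write_ratio else "read"

# The four mixes used in the evaluation: 0%, 5%, 10%, and 30% writes.
N = 10_000  # illustrative number of transactions per mix (an assumption)
for ratio in (0.00, 0.05, 0.10, 0.30):
    counts = Counter(choose_txn(ratio) for _ in range(N))
    print(f"target {ratio:.0%} writes -> observed {counts['write'] / N:.1%}")
\end{verbatim}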
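
The write discipline described in the System Setup can likewise be sketched with the \texttt{psycopg2} PostgreSQL driver. The connection parameters and the table and column names below are placeholders for illustration, not details of our deployment.

\begin{verbatim}
import psycopg2

# Placeholder connection details; purely illustrative.
conn = psycopg2.connect(host="primary-db", dbname="tpce",
                        user="bench", password="bench")
conn.autocommit = True  # we issue the transaction boundaries ourselves

cur = conn.cursor()
cur.execute("START TRANSACTION")
# Hypothetical write; any TPC-E write transaction would go here.
cur.execute("UPDATE trade SET t_st_id = %s WHERE t_id = %s", ("CMPT", 42))
cur.execute("COMMIT")
cur.close()
conn.close()
\end{verbatim}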