I'd recommend keeping the data in HDFS and converting it to the Parquet file format. Parquet uses a concise, columnar representation of nested data and will reduce the I/O required for many of your queries.
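To see why a columnar layout reduces I/O, here's a toy sketch in pure Python (an illustration of the idea only, not Parquet's actual on-disk encoding): a row store has to walk every field of every record to aggregate a single column, while a column store touches only the values of that column.

```python
# Toy illustration of row-oriented vs. columnar storage.
# This is a sketch of the concept, not Parquet's real format.

rows = [
    {"user_id": 1, "country": "US", "latency_ms": 120},
    {"user_id": 2, "country": "DE", "latency_ms": 80},
    {"user_id": 3, "country": "US", "latency_ms": 95},
]

# Row layout: averaging one field still scans every whole record.
fields_touched_row = sum(len(r) for r in rows)   # 9 fields read

# Columnar layout: the same records, stored one column per list.
columns = {
    "user_id":    [1, 2, 3],
    "country":    ["US", "DE", "US"],
    "latency_ms": [120, 80, 95],
}
avg_latency = sum(columns["latency_ms"]) / len(columns["latency_ms"])
fields_touched_col = len(columns["latency_ms"])  # only 3 fields read

print(avg_latency, fields_touched_row, fields_touched_col)
```

For a query that aggregates one of three columns, the columnar layout reads a third of the fields; on wide tables with dozens of columns the saving is proportionally larger.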
Once your data is in the Parquet format, I'd use Impala to issue SQL queries against the data. Impala implements a highly efficient execution engine for SQL queries over data stored in HDFS. Impala queries will return results to your dashboard with low latency. Unlike Hive, the Impala execution engine doesn't rely on Hadoop's MapReduce implementation.
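Since Impala speaks standard SQL, the queries behind a dashboard are ordinary aggregates. The sketch below uses Python's built-in sqlite3 purely as a stand-in engine so it runs anywhere; the table and column names are made up for illustration. Against a real cluster you'd submit the same kind of SQL through impala-shell or an ODBC/JDBC driver instead.

```python
import sqlite3

# Stand-in engine: sqlite3 here, Impala in production. The SQL is the
# kind of aggregate a dashboard would issue; the page_views table and
# its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (country TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?)",
    [("US", 100), ("DE", 40), ("US", 60)],
)

query = """
    SELECT country, SUM(views) AS total_views
    FROM page_views
    GROUP BY country
    ORDER BY total_views DESC
"""
for country, total in conn.execute(query):
    print(country, total)
```

The point of Impala is that a GROUP BY like this runs directly over Parquet files in HDFS at interactive latencies, without spinning up MapReduce jobs.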
If you have text data that you'd like to view on the dashboard, I'd recommend Cloudera Search for indexing it. Cloudera Search is a distribution of SolrCloud that stores and serves sharded Lucene indices out of HDFS.
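Conceptually, the Lucene indices behind Search are inverted indices: a map from each term to the set of documents containing it, so keyword queries never scan the full text. A minimal pure-Python sketch of the idea (not Lucene's actual data structures or scoring):

```python
from collections import defaultdict

# Minimal inverted index: term -> set of document ids.
# A conceptual sketch of what a Lucene index does, not its real format.
docs = {
    1: "impala query engine for hdfs",
    2: "solr search over lucene indices",
    3: "impala and search on one cluster",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(*terms):
    """Return ids of documents containing all of the given terms."""
    result = set(docs)
    for term in terms:
        result &= index.get(term, set())
    return sorted(result)

print(search("impala"))            # docs 1 and 3
print(search("impala", "search"))  # doc 3 only
```

Cloudera Search shards an index like this across the cluster and serves it out of HDFS, so keyword lookups stay fast as the document collection grows.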
Installing Impala and Search is straightforward with Cloudera Manager. Cloudera Manager is a free software tool that provides an in-browser GUI for installing and managing Cloudera and related third-party software. If you install and manage your cluster with Cloudera Manager, you don't have to worry about tuning your configuration or ensuring cross-version compatibility between HDFS, Parquet, and Impala.
To try out your new cluster, you may want to use Cloudera Manager to install Hue as well. Hue provides a web-based GUI for end users of Cloudera and related third-party software. From Hue you can explore the data in HDFS and issue SQL or keyword search queries over your data.
For an example of an interactive dashboard built with D3 that uses Cloudera Impala and Search on the backend, check out Zoomdata. Their demo video is a good illustration of the interactive capabilities of Impala and Search.
If you'd like to use Tableau, Cloudera makes a connector for Tableau available that works with Impala.
Note that Impala's already strong performance on small data sets will improve further with the in-memory caching being added to HDFS in our next release.