The model is generated by libsvm on the training data using tuned SVM parameters.
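As a sketch of how such a model could be produced, the following uses scikit-learn's `SVC` (which is itself backed by libsvm) rather than the libsvm command-line tools, with synthetic feature vectors standing in for the real training data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the training data: two clusters of
# 4-dimensional feature vectors, one per class.
X = np.vstack([rng.normal(0.0, 0.5, (100, 4)),
               rng.normal(2.0, 0.5, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

# RBF-kernel SVM; C and gamma are the parameters one would tune.
# probability=True enables predict_proba, needed later for thresholding.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

On real EEG features the tuning of `C` and `gamma` (e.g. via grid search) matters far more than on this well-separated toy data.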

The system is divided into two parts. The first part focuses on the storage and transmission of EEG data for two different events: when the subject's eyes are open and when they are closed. From researching brain waves, it was found that brain activity differs between the two events. After careful study and testing, we concluded that when the eyes are open, the Alpha and Beta wave features generated from the raw EEG signal are high; when the eyes are closed, they are low. However, these features alone were not enough to give reliable and appropriate classification of the two events. Therefore, we added more features to the system to increase its reliability. After working with one feature and then two features, we extended the system to work with four different brain wave features in the range of 0.5 to 30 Hz with a sampling rate of 512 Hz. These features are Delta, Theta, Alpha, and Beta.

An Edge class is a powerful WuClass that has internal data storage and time series data processing capability. Like every WuClass, an edge class should be assigned a unique WuClass ID, which should be referred to in WuKong Standard Library.xml in the master. In the EEGEdgeClass, the ID is explicitly defined by the WuClass annotation, so that the object instance of EEGEdgeClass can be discovered by the master.
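Returning to the four band features named above (Delta, Theta, Alpha, Beta): a minimal sketch of extracting them from a raw EEG segment with an FFT, assuming conventional band boundaries (the exact cut-off frequencies are an assumption, not taken from the thesis):

```python
import numpy as np

# Assumed frequency bands (Hz); conventional choices, not thesis-specified.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
FS = 512  # sampling rate from the text (Hz)

def band_powers(segment, fs=FS):
    """Return the spectral power in each band for a 1-D raw EEG segment."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Example: one second of synthetic 10 Hz (alpha-band) activity.
t = np.arange(FS) / FS
powers = band_powers(np.sin(2 * np.pi * 10 * t))
print(max(powers, key=powers.get))  # the alpha band dominates
```

A production pipeline would typically apply windowing and artifact removal before the FFT; those steps are omitted here.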

According to the property declaration of the WuClass in WuKong Standard Library.xml, a developer needs to define inputs and outputs for the edge class, and use the annotations of the edge framework to declare what kind of data structure should temporarily store the incoming time series data. The input property is declared as a buffer whose data ring capacity is 2000 data points, whose index ring capacity is 30 units, and whose index is built every 1000 milliseconds. Therefore, the buffer holds data in a time window of 30 seconds and keeps at most 2000 data points. The buffer stores the raw signal from the EEG device, from which time series operators fetch data and generate features.

For every output property, an edge class needs to define the corresponding set function, so that a WKPF write-property message is generated and sent out when its value is updated. In the example, output is an output property whose property ID is 1. After defining the input and output properties of the intelligent component, we implement two important methods of the Edge class. The register extension function returns a list of extensions, each of which is a handler for a specific stage in the data processing pipeline. In the EEG Edge class, we registered the EEG Feature Extraction Extension and the EEG Execution Extension, both of which are introduced in detail later. Besides providing the data processing pipeline, the edge server also provides pub/sub communication, so that an offline model can be initialized and updated through a pub/sub topic. In our case the model is loaded locally, so no topic is needed. Since models are trained offline in the EEG study, we ignore the online learning extension and focus only on how to use models to do online classification on the extracted features. Here, the EEG Execution Extension implements both Executable and Initiable.
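A minimal Python sketch of such an execution extension follows; the class and method names are hypothetical stand-ins, since the actual WuKong edge classes are implemented against the Java edge API:

```python
# Sketch only: hypothetical names, not the real WuKong edge API.
THRESHOLD = 0.88  # probability threshold used to trigger the action

class EEGExecutionExtension:
    def init(self, model):
        # The thesis loads an offline-trained libsvm model from the
        # local file system; here the model is passed in directly
        # to keep the sketch self-contained.
        self.model = model

    def execute(self, features, context=None):
        # The model reports [P(eyes open), P(eyes closed)].
        p_open, p_closed = self.model.predict_proba([features])[0]
        return "eyes_closed" if p_closed >= THRESHOLD else "eyes_open"

class StubModel:
    """Stands in for the offline-trained classifier."""
    def predict_proba(self, X):
        return [[0.05, 0.95]]

ext = EEGExecutionExtension()
ext.init(StubModel())
print(ext.execute([0.1, 0.2, 0.3, 0.4]))  # -> eyes_closed
```

The separation between `init` (one-time model loading) and `execute` (per-window classification) mirrors the Initiable/Executable split described above.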

Within the init function, we first load the model from the local file system. The execute function accepts a list of features as its first parameter and the execution context as its second parameter. Within the function, we use the model to classify whether the features should be labeled as eyes closed or eyes open. Once we know the probability of each event, we set 0.88 as the probability threshold that actually triggers the eyes-closed action by setting the output value. We tested the application on real physical devices.

The importance of emotions lies in everyday human-to-human communication and interaction. Understanding and recognizing human emotional status plays an important role in human communication. A human-computer interface can play the same role as a human being in understanding and recognizing human emotions, and adjust its settings to fit the user's emotions. There are different approaches to detecting and recognizing human emotions. First, facial expression is one of the earliest techniques used to detect human emotions, and voice recognition based on voice tone can also detect emotions. However, these techniques are susceptible to deception and vary from one situation to another. Second, physiological signals such as the electrocardiogram and respiration are also used to detect emotion. This approach provides more complex information than what is needed to detect emotion. Third, brain wave signals are used to detect emotion, measured for example by electroencephalography, electrocorticography, and functional magnetic resonance imaging. The advantages of using brain waves are their strong relevance to emotions and that they are not prone to deception.

There is a spectrum of emotions which can differ from one person to another, and the research community has presented different models of human emotions. One model presents the basic emotions as happiness and sadness; another model presents the basic emotions as fear, anger, depression, and satisfaction.
Another model uses multiple dimensions or scales for emotion categorization. In this model, emotions are characterized by two main dimensions: valence and arousal.

Valence ranges from positive to negative, whereas arousal ranges from low to high. For instance, fear can be defined as negative valence and high arousal, whereas excitement can be defined as positive valence and high arousal. Figure shows the dimensional model of emotions.

There are different types of stimuli to trigger emotion: self-eliciting, recalling, and using an external stimulus. In order to stimulate the subjects' emotions in this thesis, I used excerpt videos from the LIRIS-ACCEDE library, which has around 9800 good-quality video excerpts used to induce different types of human emotions. This thesis used the library videos in the first and second trials. The third trial used a different resource to trigger emotions, through videos with a marketing background. Only two emotion states were captured in the third trial: high valence-high arousal and low valence-low arousal. Figure shows some images from these different video excerpts.

The aim of this project is to investigate the ability of a low-quality, cheap commercial EEG headset to classify different mental tasks. The EEG signal in this thesis was acquired using a single electrode placed on the forehead and was sent to the computer via Bluetooth. In order to integrate the EEG headset with the WuKong framework, the EEG Server WuClass has been designed and built for the first two applications. Also, in order to build edge classification, the EEG Edge WuClass has been designed and built for the third application.

In the first two applications, this thesis integrates the EEG headset with the Internet of Things framework "WuKong" to allow the user to control external applications, such as turning lights on/off, playing music, etc. Two applications were built: one aimed to assist elderly people with different tasks, and the second aimed to read the mind state of a person in an office.
In the Old People Assistant application, the system was built using two sensors. The first sensor is a camera, which captures eye blinking using the Haar algorithm from the OpenCV library to detect and classify blinks. The second sensor is the EEG headset, which captures the eSense signals, defined as the attention and meditation signals, whose values range from 0 to 100. Different patterns are built based on the attention or meditation signal values. In order to change from one state to another, or to stay in a certain state, a number of eye blinks has to be detected by the system. The results are shown in Chapter 3 for switching between 2 LEDs, but they can be extended to play different types of music and switch different LEDs, as shown in Figure 3.2.

The second application is designed to show the mind state of a person inside an office. In this application, two WuClasses have been built. The first is the EEG Server WuClass, which receives EEG signals, builds different patterns, and generates a different index number for each pattern. The second is the LED Strips WuClass, which receives the index number and plays the pattern that corresponds to it. In this application, both eSense signals are sent to two EEG servers in WuKong, so the WuKong framework is able to receive two different signals, attention and meditation.

The third application is called classifying eye states on the edge. In order to build it, the first step is to capture the raw EEG signal from the EEG headset, which is done using a Python script. The second step was the preprocessing applied to the collected EEG signals, including the Fast Fourier Transform, removing EMG and EOG artifacts, and segmenting the EEG into one-second segments.
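The segmentation part of the preprocessing step can be sketched as follows; this is a simplification that splits the recording into non-overlapping one-second windows and omits the artifact-removal stage described above:

```python
import numpy as np

FS = 512  # sampling rate used in the eye-state application (Hz)

def segment_one_second(raw, fs=FS):
    """Split a raw EEG recording into non-overlapping 1-second
    segments, discarding any trailing partial second."""
    n = (len(raw) // fs) * fs          # samples that fill whole seconds
    return np.asarray(raw[:n]).reshape(-1, fs)

# Example: 5.2 seconds of recording yields 5 one-second segments.
segments = segment_one_second(np.zeros(5 * FS + 100))
print(segments.shape)  # (5, 512)
```

Each row of the result is then fed independently into the feature-extraction step.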

The third step was to extract the EEG features, namely the power spectrum density features. In this application, three different subjects volunteered to give their EEG signals, which were acquired and preprocessed, and from which the power spectrum density features were extracted. After collecting the EEG data, we applied different machine learning algorithms in order to classify between the two eye states, using the accuracy of the classifier as our evaluation measurement. We reached 90% accuracy in classifying between the two eye states using a Support Vector Machine with a Radial Basis Function kernel as our classifier. In order to use this classifier online in real time, we used the progression WuKong framework, which supports the intelligent edge. We built an EEG Edge class which has two components: first, a feature extraction extension that uses the relative intensity ratio operator to calculate the intensity ratio of the EEG features; second, an execution extension that loads the model that was generated offline and, based on the probability of the eye events, uses a threshold of 0.88 to trigger an action when the eyes are closed.

The emotions application is presented in Chapter 4. It was implemented in order to classify between two different types of emotion, using video clips from the LIRIS-ACCEDE library designed to trigger these emotions. Six subjects participated in this experiment, which lasted 60 seconds for high valence-high arousal and another 60 seconds for low valence-low arousal. Different types of features were extracted from the collected EEG signals, including time, frequency, and nonlinear features, and 24 of these features were selected for use with the machine learning algorithms. Different machine learning algorithms were applied to the collected data, and the average accuracy using Linear Discriminant Analysis was 79.31%.
The model was evaluated using a confusion matrix, in which the diagonal is darker than the off-diagonal entries, implying that the classifier is a good classifier. The second method used to evaluate the model was the F1-score, which showed an average of 83% for subject number 4. The third method was the Receiver Operating Characteristic, which showed an area under the curve of 91% for subject number 4. The last method was the learning curve, which showed a misclassification range from 12% to 17% for subject number 4.

This thesis studies and discusses the ability of a low-cost EEG headset to classify different eye events and different mental tasks such as attention, meditation, and emotions. Table 5.1 compares different research literature on eye-event classification, using different EEG headsets ranging from 4 to 14 electrodes with costs of up to $799. Different features and classifiers are used to classify eye events, with accuracies ranging from 73% to 95%. This thesis presents a different approach to classifying eye events using a low-cost EEG headset that does not exceed $100. The power spectrum density features are extracted, and a Support Vector Machine with a Radial Basis Function kernel is used for classification. In this thesis, an accuracy of 90% is obtained in classifying between eyes open and eyes closed. With 90% accuracy using only one electrode and a cost of only $99, this work is a better solution for detecting the different eye states compared to the other studies shown in Table 5.1.

Table 5.2 shows data from research literature that discusses emotion classification using different EEG headsets.
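The evaluation measures named above for the emotions model (confusion matrix, F1-score, and ROC area under the curve) can be computed with scikit-learn as below; the labels and scores here are a toy illustration, not the thesis's subject-4 results:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

# Toy held-out predictions (synthetic, for illustration only).
y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred  = np.array([0, 0, 0, 1, 1, 1, 1, 1])          # one misclassification
y_score = np.array([.1, .2, .3, .6, .7, .8, .9, .95])  # predicted P(class 1)

cm  = confusion_matrix(y_true, y_pred)  # rows: true class, cols: predicted
f1  = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)
print(cm)        # a good classifier has a dominant diagonal
print(f1, auc)
```

Note that the confusion matrix and F1-score judge hard predictions, while the ROC AUC judges the ranking quality of the probability scores, which is why both kinds of measure are reported.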