LAB2 - MaxGolden/CS5551_Team13_LabAssignments GitHub Wiki
Code Links: 20_Hongkun Jin_GitHub, 14_John C Goza_GitHub

YouTube Link: LAB2_Video
The application must have sign-up and login activities. Use any of the cloud services, like Firebase, for real-time storage.
- This is our project welcome page. It shows the Login and Signup buttons, which make getting started easy.
<div padding>
<button ion-button block (click)="signup()">{{ 'SIGNUP' | translate }}</button>
<button ion-button block (click)="login()" class="login">{{ 'LOGIN' | translate }}</button>
</div>
- These lines simply add the two buttons.
- This is what our login page looks like. It contains email address and password inputs, and users can also sign in with another account (for now, Google is the only OAuth provider we support).
- As mentioned above, this is the Google OAuth login option.
doGoogleLogin() {
  let nav = this.navCtrl;
  let env = this;
  let loading = this.loadingCtrl.create({
    content: 'Please wait...'
  });
  loading.present();
  this.googlePlus.login({
    'scopes': '',
    'webClientId': 'YOUR WEB_CLI_ID',
    'offline': true
  })
  .then(function (users) {
    loading.dismiss();
    env.nativeStorage.setItem('user', {
      Google_name: users.displayName,
      Google_email: users.email,
      Google_picture: users.imageUrl
    })
    .then(function () {
      nav.push(SettingsPage);
    }, function (error) {
      console.log(error);
    });
  }, function (error) {
    loading.dismiss();
  });
}
- This code handles the Google OAuth login; the Cordova plugin we use is GooglePlus.
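The mapping from the plugin's login result to the record we persist in NativeStorage can be factored into a small helper. This is only a sketch: the `displayName`, `email`, and `imageUrl` field names are taken from the `doGoogleLogin` code above, and the interfaces here are ours, not the plugin's.

```typescript
// Fields we read from the cordova-plugin-googleplus login() result
// (as used in doGoogleLogin above).
interface GoogleUser {
  displayName: string;
  email: string;
  imageUrl: string;
}

// The record we persist under the 'user' key in NativeStorage.
interface StoredUser {
  Google_name: string;
  Google_email: string;
  Google_picture: string;
}

// Pure mapping from the plugin's user object to our stored shape.
function toStoredUser(user: GoogleUser): StoredUser {
  return {
    Google_name: user.displayName,
    Google_email: user.email,
    Google_picture: user.imageUrl,
  };
}
```

Keeping this mapping in one place makes it easy to test without mocking the plugin itself.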
- Email/password authentication in Firebase
- Then, click the Login button:
async doLogin(user: User) {
  try {
    // await is required here: signInWithEmailAndPassword returns a promise,
    // which would otherwise always be truthy.
    const result = await this.afAuth.auth.signInWithEmailAndPassword(user.email, user.password);
    if (result) {
      this.navCtrl.setRoot('ProfilePage');
    }
  }
  catch (e) {
    console.error(e);
  }
}
- This code performs Firebase Authentication: it signs the user in with their email and password.
- After logging in with your account, you can update your profile: profile photo, username, etc.
ProsaveResults(imageData) {
  this.UploadPicItems = imageData;
  this.afAuth.authState.take(1).subscribe(data => {
    // @ts-ignore
    this.SaveProPicItems = this.afDatabase.object(`profile/${data.uid}`).set({imageData: imageData})
      .then(_ => { })
      .catch(err => { this.ProshowAlert(err); });
  });
}
- The imageData above is the output of the camera; these lines upload it to Firebase under the user's auth.uid.
- Firebase will then record all the info you enter for your profile.
createProfile() {
  this.afAuth.authState.take(1).subscribe(auth => {
    this.afDatabase.object(`profile/${auth.uid}`).set({Profile: this.profile, imageData: this.UploadPicItems})
      .then(() => this.navCtrl.setRoot('TabsPage'));
  });
}
- This code uploads everything you entered for your profile to Firebase.
- On the user page, you can see your profile photo and username.
this.afAuth.authState.take(1).subscribe(data => {
  if (data) {
    this.profileData = this.afDatabase.object(`profile/${data.uid}`);
  }
  else {
    this.toast.create({
      message: 'Please Login',
      duration: 1000
    }).present();
  }
});
- This retrieves the profile belonging to your own account.
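The page then renders fields from the fetched record. Extracting the display values can be sketched as a small helper; note this is an assumption-laden sketch: the `Profile.username` field name and the `Anonymous` fallback are ours, inferred from the upload code above, not taken from the actual app.

```typescript
// Hypothetical shape of the record stored under profile/{uid}
// (imageData is the base64 photo written by ProsaveResults;
// the Profile.username field is an assumption for illustration).
interface ProfileRecord {
  Profile?: { username?: string };
  imageData?: string;
}

// Pull out the fields the user page displays, with safe defaults
// when the record is missing pieces.
function displayFields(record: ProfileRecord | null) {
  return {
    username: (record && record.Profile && record.Profile.username) || 'Anonymous',
    photo: (record && record.imageData)
      ? 'data:image/png;base64,' + record.imageData
      : null,
  };
}
```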
- This is the sign-up page; it includes email and password fields, plus the third-party account login options.
- When you create an account yourself, Firebase records the account you registered.
async doSignup(user: User) {
  try {
    const result = await this.afAuth.auth.createUserWithEmailAndPassword(user.email, user.password);
    if (result) {
      this.navCtrl.setRoot('TabsPage');
    }
  }
  catch (e) {
    console.error(e);
  }
}
- Create your account using Firebase.
Use any Machine Learning API of your choice: [Watson, GoogleML, AWS]
- After taking a product picture on the search page of our app, the Google Vision API gives the user a web-entity analysis of the product image. For example, for a Sony headset, Google even identifies the exact model: it tells the user "This is a Sony WH1000XM3"!
constructor(public http: HttpClient) {
  console.log('Hello GoogleCloudVisionServiceProvider Provider');
}

getLabels(base64Image) {
  const body = {
    "requests": [
      {
        "image": {
          "content": base64Image
        },
        "features": [
          {
            "type": "WEB_DETECTION"
          }
        ]
      }
    ]
  };
  return this.http.post('https://vision.googleapis.com/v1/images:annotate?key=' + environment.googleCloudVisionAPIKey, body);
}
- We use "WEB_DETECTION" as the feature type in our requests to the Google Cloud Vision API.
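Pulling the entity names out of the response could look like the sketch below. The `responses[0].webDetection.webEntities` shape follows the documented images:annotate response for a WEB_DETECTION request; the helper itself is illustrative, not the app's actual parser.

```typescript
// Minimal slice of the images:annotate response for a WEB_DETECTION request.
interface VisionResponse {
  responses: {
    webDetection?: {
      webEntities?: { description?: string; score?: number }[];
    };
  }[];
}

// Returns the web-entity descriptions, strongest match first,
// e.g. ["Sony WH1000XM3", "Headphones", ...].
function extractWebEntities(res: VisionResponse): string[] {
  const detection = res.responses[0] && res.responses[0].webDetection;
  if (!detection || !detection.webEntities) { return []; }
  return detection.webEntities
    .filter(e => !!e.description)
    .sort((a, b) => (b.score || 0) - (a.score || 0))
    .map(e => e.description as string);
}
```

In the app, a function like this would run inside the subscription to the `getLabels(...)` observable.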
Use any of the smart phone hardware features like Sensors/Camera/Media/Connectivity/Maps etc.
- The Camera is the hardware feature we use throughout the app: for the profile, for searching, etc. Example:
text: 'From Camera',
handler: async () => {
  try {
    const options: CameraOptions = {
      quality: 50,
      targetHeight: 600,
      targetWidth: 600,
      destinationType: this.camera.DestinationType.DATA_URL,
      encodingType: this.camera.EncodingType.PNG,
      mediaType: this.camera.MediaType.PICTURE,
    };
    this.camera.getPicture(options).then((imageData) => {
      // @ts-ignore
      this.ProsaveResults(imageData);
    }, err => {
      this.ProshowAlert(err);
    });
  }
  catch (e) {
    console.error(e);
  }
}
},
- This code uses the camera to take a picture, then passes the image to this.ProsaveResults for processing.
Complete the following tasks with Ionic Framework:
- Our team project is built on Ionic 3.
The Page should be Mashup of at least Two Web Services (refer to the web services from the spreadsheet). One of them should be from the list of knowledge/Machine learning/AI services and that you haven't used in your previous work.
- This page mashes up the Price API from Walmart Labs and Google Cloud Vision. As described above, Google Vision is the service we had not used in our previous work.
- The other service is the price API we use for products: it returns product price info when you type part of a product name.
searchapiItems(apiItemsName) {
  var url = 'http://api.walmartlabs.com/v1/search?apiKey=vwtzj6yrpv53yrp62squshbm&lsPublisherId={Your%20LinkShare%20Publisher%20Id}&query=' + encodeURI(apiItemsName);
  this.data = this.http.get(url).map(ite => (<any>ite));
  var response = this.data;
  return response;
}
- This is the code for the API request.
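The Search API response carries an `items` array with fields such as `name` and `salePrice`; turning it into display strings for the page might look like the helper below. This is a sketch: the response interface is trimmed to the two fields we use, and the formatting is our own, not the app's.

```typescript
// Minimal slice of the Walmart Labs Search API response.
interface WalmartSearchResponse {
  items?: { name: string; salePrice?: number }[];
}

// Map the response to display strings like "Sony WH1000XM3 - $278.00".
function formatPriceResults(res: WalmartSearchResponse): string[] {
  return (res.items || []).map(item =>
    item.salePrice != null
      ? `${item.name} - $${item.salePrice.toFixed(2)}`
      : `${item.name} - price unavailable`);
}
```

In the page component, this would run inside the subscription to the observable that `searchapiItems(...)` returns.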
Write at least 3 unit test cases related to your application
import { async, TestBed } from "@angular/core/testing";
import { IonicModule, Platform } from "ionic-angular";
import { StatusBar } from "@ionic-native/status-bar";
import { SplashScreen } from "@ionic-native/splash-screen";
import { StatusBarMock, SplashScreenMock, PlatformMock } from "ionic-mocks-jest";
import { Observable } from "rxjs/Observable";
import "rxjs/add/observable/of";
import { MyApp } from "./app.component";
import { Settings } from "../providers";

export class SettingsServiceStub {
  public get(key: any): any {
    // return is needed so callers receive the observable
    return Observable.of(key);
  }
}

describe("MyApp Component", () => {
  let fixture;
  let component;

  beforeEach(
    async(() => {
      TestBed.configureTestingModule({
        declarations: [MyApp],
        imports: [IonicModule.forRoot(MyApp)],
        providers: [
          { provide: StatusBar, useFactory: () => StatusBarMock.instance() },
          { provide: SplashScreen, useFactory: () => SplashScreenMock.instance() },
          { provide: Platform, useFactory: () => PlatformMock.instance() },
          { provide: Settings, useClass: SettingsServiceStub }
        ]
      });
    })
  );

  beforeEach(() => {
    fixture = TestBed.createComponent(MyApp);
    component = fixture.componentInstance;
  });

  it("should be created", () => {
    expect(component instanceof MyApp).toBe(true);
  });

  it("should have 14 pages", () => {
    expect(component.pages.length).toBe(14);
  });

  it("root should be tutorial", () => {
    expect(component.rootPage).toBe('TutorialPage');
  });
});
- This is the test code for our application.
- tests run and pass
For testing, we integrated Jest into our Ionic package along with the jest-preset-angular npm package, which made the tests work without difficult manual configuration. After installing Jest, we created npm scripts to trigger the test suite and configured our tsconfig files to point to the code we wanted tested and to exclude non-testable directories. We then wrote three tests specific to our code:
- that the app stood up and created an instance of the MyApp component
- that the MyApp component contained the accurate number of pages (in our case, 14)
- that the root component of the stood-up app pointed to our tutorial

Without a doubt, the hardest part of integrating tests into our code was establishing mocks and removing services that do not have good mocks defined by community code. We could have created a mock for one such service ourselves, but it would have taken extensive work and research into mocking both Angular services and stores. Thus, adding tests required removing our Translate service.
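The npm scripts and Jest wiring described above might look roughly like the package.json excerpt below. This is a sketch under our setup assumptions: the script names, the setup file path, and the transformIgnorePatterns entries are illustrative, not our exact file.

```json
{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch"
  },
  "jest": {
    "preset": "jest-preset-angular",
    "setupTestFrameworkScriptFile": "<rootDir>/test-config/jest-setup.ts",
    "transformIgnorePatterns": [
      "node_modules/(?!@ionic-native|ionic-angular)"
    ]
  }
}
```

With this in place, `npm test` runs the whole suite.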
Use at least one Cordova plugin related to your project
- The plugins we are using
YSlow is not under maintenance
- While YSlow has not been maintained for many years, we still felt it would be possible and beneficial to analyze our app's load times. To accomplish this, we installed the "Page load time" extension for Chrome, which provides timed analysis of loading each page component. On running the app, we found that routing to the app (including DNS and request time) took 0 ms, which makes sense since the app is loaded on the machine running the test. Loading the Document Object Model took 1015 ms, which was the vast majority of our load time. Given the complexity and immersive UI the app has right now, a DOM load time of 1015 ms is well within bounds.
Edited by John C Goza (Jack) & Hongkun Jin (Max)