0 Notes - wbwangk/wbwangk.github.io GitHub Wiki

Fog computing and the Internet of Things

Original article
Local copy: C:\Users\wbwang\Desktop\SBG大数据研讨\大数据2\computing-overview.pdf.htm (machine-translated into Chinese with Google Translate)

golang

There is a Golang environment on share3. /root/.bashrc:

export GOROOT=/usr/local/go
export PATH=$PATH:$GOROOT/bin
export GOPATH=/go
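
A quick sanity check that the environment above is picked up (assuming Go is installed under /usr/local/go as configured):

$ source /root/.bashrc
$ go version          (prints the installed Go version)
$ go env GOPATH       (should print /go)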

phantomjs

PhantomJS can parse web pages and execute the JavaScript in them; it can also render page screenshots.
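
A minimal screenshot sketch, assuming phantomjs is on the PATH; the script name shot.js and the target URL are placeholders:

$ cat > shot.js <<'EOF'
// open a page, wait for it to load, then render it to a PNG file
var page = require('webpage').create();
page.open('http://example.com', function (status) {
    if (status === 'success') {
        page.render('example.png');   // write the screenshot
    }
    phantom.exit();
});
EOF
$ phantomjs shot.js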

readthedocs.org

readthedocs.org is a large, free web service.

http://docs.sensorbee.io/en/latest/index.html is generated by the service at http://readthedocs.org/projects/sensorbee/

The documentation source lives at https://github.com/sensorbee/docs/blob/master/source/index.rst

Front-end debugging

  • jsbin.com
  • jsfiddle.net

WiFi module

WiFi module: ESP8266 serial-to-WiFi / wireless transparent transmission / industrial grade / Ai-Thinker / ESP-12S https://world.taobao.com/item/536873562220.htm?spm=a312a.7700714.0.0.flDmTN#detail

Big data

qubole: Hadoop as a service
https://github.com/sequenceiq/cloudbreak: Hadoop as a service
plot.ly: big data visualization cloud service
The GitHub ckan/ckan project: a data portal and data management system; try running the ckan/ckan docker image.
data.world can serve as a benchmark for an open data portal; datasets are categorized, can carry multiple tags, and can be downloaded.
A 40 GB vagrant box: https://github.com/chef/bento

IDAP

Environment deployed by the e-commerce team: http://10.0.14.1:8090/idap/
Official demo environment: idap.inspur.com
The username/password for both environments is superamdin/superadmin

public JSON storage

The public JSON storage

Free, open-source service to store JSON data through a RESTful API.
No signup or authentication needed.

Jekyll documentation

http://idratherbewriting.com/documentation-theme-jekyll/index.html
Very detailed.

docpad

https://github.com/docpad/docpad
Empower your website frontends with layouts, meta-data, pre-processors (markdown, jade, coffeescript, etc.), partials, skeletons, file watching, querying, and an amazing plugin system. DocPad will streamline your web development process, allowing you to craft full-featured websites quicker than ever before. https://docpad.org

beautiful-docs

https://github.com/PharkMillups/beautiful-docs
Pointers to useful, well-written, and otherwise beautiful documentation.

Online LaTeX equation editor

http://latex.codecogs.com/eqneditor/editor.php
Enter the following code into the tool above:

r=\frac{\sum (x-\bar{x})(y-\bar{y})}{(n-1)s_{x}s_{y}}

The output is the Pearson correlation coefficient formula:

Math on GitHub Pages

http://g14n.info/2014/09/math-on-github-pages/
Shows how to display math formulas on GitHub Pages with LaTeX. Two approaches are described: one uses MathJax to render formulas on the client side, the other uses KaTeX to render them on the server side.
Use HTML tags to produce superscripts, e.g. E = mc<sup>2</sup>, and subscripts, e.g. Log<sub>2</sub>. $$ M = \left( \begin{array}{ccc} x_{11} & x_{12} & \ldots \\ x_{21} & x_{22} & \ldots \\ \vdots & \vdots & \ldots \end{array} \right) $$

vagrant snapshot

snap3: after installing Kerberos; the KDC is on u1404.
snap4: after installing Ranger; Ranger is on u1402.
Snapshots taken after reinstalling vagrant9 (3 GB RAM, no swapfile):
u140[1-3].1: initial installation, Kerberos not enabled;
u140[1-3].2: after enabling Kerberos;
u140[1-3].3: after installing Ranger;
u140[1-3].4: after installing Hive;
u140[1-3].5: after configuring the Ranger plugins (HDFS and Hive plugins);
u140[1-3].6: after installing Knox;
u140[1-3].7: after disabling Kerberos;
u140[1-3].8: Knox + LDAP authentication;
u140[1-3].9: Ranger removed;
u140[1-3].10: Hive removed; has Ranger/Knox, no Hue, Kerberos disabled;
u140[1-3].11: includes Hive/HBase/Zeppelin/Spark/Knox, no Hue.
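
Saving and rolling back these states looks roughly like this (a sketch; the VM name u1401 and snapshot name u1401.1 follow the naming above):

$ vagrant snapshot save u1401 u1401.1        (save the current state of u1401)
$ vagrant snapshot list u1401                (list snapshots of u1401)
$ vagrant snapshot restore u1401 u1401.1     (roll back to the saved state)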

Some HDP paths

knox: /usr/hdp/current/knox-server
keytab: /etc/security/keytabs/
log: /var/log/hadoop/, /var/log/hadoop/hdfs/
java: /usr/jdk64/jdk1.8.0_77/

Hadoop Auth (SPNEGO and delegation token based authentication) with Apache Knox

Link: contains an example Knox Advanced topology configuration.

ubuntu14 desktop

base box 'box-cutter/ubuntu1404-desktop'

apt-get nginx proxy

(Reference) nginx forward proxy configuration:

server {  
    resolver 202.102.128.68; # specify the DNS server IP address
    listen 8098;  
    location / {  
        proxy_pass http://$http_host$request_uri; # protocol and address of the proxied server
    }  
}  

Run this before using apt-get:

$ export http_proxy=http://<nginx-host-ip>:8098

After that, apt-get runs much faster. The same trick lets a VM reach blocked sites: run ipconfig to find the host machine's IP, look up the proxy port in your circumvention tool's UI, and replace 8098 with that port number.

$ git config --global http.proxy http://<nginx-host-ip>:8098

http.proxy can also be replaced with https.proxy or all.proxy.

xargs

xargs is a very powerful command: it takes the output of one command and turns it into the arguments of another.

$ cat url-list.txt | xargs wget -c
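
Another hedged example: feed the file names found by find to rm (the /var/log/myapp path is illustrative):

$ find /var/log/myapp -name "*.log" | xargs rm -f
$ find /var/log/myapp -name "*.log" -print0 | xargs -0 rm -f    (safer when names contain spaces)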

JDK

Removal method 1:

java -version
yum list java*
yum remove java-1.6.0-openjdk
yum remove java-1.7.0-openjdk

Removal method 2:

$ rpm -qa | grep java      or  rpm -qa | grep jdk
java-1.8.0-openjdk-headless-1.8.0.131-0.b11.el6_9.x86_64
$ rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.131-0.b11.el6_9.x86_64

Download the RPM package from the Oracle Java download page. Using wget directly does not work; download it under Windows first, then open Git Bash and copy it to CentOS with scp (e.g. scp jdk-8u131-linux-x64.rpm root@c6801:/opt).

$ rpm -ivh jdk-8u131-linux-x64.rpm   or   yum install jdk-8u131-linux-x64.rpm

Redirecting make output to files

Normal output goes to 1.txt, error output to 2.txt:

make xxx 1> 1.txt 2>2.txt

All output goes to the same file, 3.txt:

make xxx > 3.txt 2>&1

Ambari osFamily

<osFamily>any</osFamily>
<osFamily>redhat7,amazon2015,redhat6,suse11</osFamily>
<osFamily>suse12</osFamily>
<osFamily>debian7,ubuntu12,ubuntu14,ubuntu16</osFamily>

knox / hue

Knox is not a HUE replacement, Knox is a REST Gateway, reads and writes HTTP request/response. HUE is a GUI that reads HTTP but talks RPC with the cluster.
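
For example, a REST call to WebHDFS routed through the Knox gateway looks roughly like this (a sketch assuming the default topology and the demo guest account from the Knox documentation; knox-host is a placeholder):

$ curl -iku guest:guest-password 'https://knox-host:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'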

Security Assertion Markup Language (SAML)

The SAML specification defines three roles: the principal (typically a user), the identity provider (IdP), and the service provider (SP). In the use case addressed by SAML, the principal requests a service from the service provider. The service provider requests and obtains an identity assertion from the identity provider. On the basis of this assertion, the service provider can make an access control decision; in other words, it can decide whether to perform the service for the connected principal.
Before delivering the identity assertion to the SP, the IdP may request some information from the principal, such as a username and password, in order to authenticate it. SAML specifies the assertions between the three parties: in particular, the messages asserting identity that are passed from the IdP to the SP. In SAML, one identity provider may provide SAML assertions to many service providers. Likewise, one SP may rely on and trust assertions from many independent IdPs.
SAML does not specify the authentication method at the identity provider; it may use a username and password, or another form of authentication, including multi-factor authentication. A directory service such as LDAP, RADIUS, or Active Directory, which lets users log in with a username and password, is a typical source of authentication tokens at an identity provider. The popular Internet social networking services also provide identity services that could in theory be used to support SAML exchanges.

Installing Chrome on the Ubuntu desktop

Reference

Kerberos proxy and SAML

FreeIPA is a centralized identity management system. Official site
Ipsilon can integrate with FreeIPA to provide federated SSO.
kdcproxy is an HTTPS proxy for MIT Kerberos. Reference

Linux timezone settings

Command to change the timezone: tzselect
echo "TZ='Asia/Shanghai'; export TZ" >> ~/.profile
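
On systemd-based systems (e.g. CentOS 7), timedatectl can do the same thing; a sketch:

$ timedatectl set-timezone Asia/Shanghai
$ timedatectl                             (verify the current timezone)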

Using kinit on Windows

Reference
Single sign-on with OpenConnect VPN server over FreeIPA

CentOS 7.0 snapshots

c700[1-4].2: installed IPA server/client; a kerberized cluster that uses IPA as the KDC. ipa-server is installed on c7004.

freeipa-letsencrypt

These two scripts automatically obtain Let's Encrypt certificates and install them into FreeIPA's web interface.
Project page

httpd+php

See the official Ubuntu documentation
vm:/e/vagrant9/ambari-vagrant/ubuntu16.4/u1601

$ apt install apache2 php libapache2-mod-php
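
A quick way to verify that PHP is wired into Apache (a sketch; the file name info.php is arbitrary):

$ echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php
$ curl -s http://localhost/info.php | head    (should return the phpinfo page)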

community.hortonworks.com/articles

【security】
Apache Ranger and HDFS
How to enable Knox proxying for Zeppelin
Ambari 2.4 Kerberos with FreeIPA
Configure Knox to access HBASE UI
Enable HTTPS for HDFS
Secured access to Hive using Zeppelin's jdbc(hive) interpreter on a Kerberized Ranger
Hadoop security best practices & recommendations for PROD environment.
Kafka Authorization in secure environment
Configure Knox to access HDFS UI
One Way Trust - MIT KDC to Active Directory
Common Kerberos Errors and Solutions
Configure Knox to access Ambari UI
Securing Your Clusters in the Public Cloud
Tag Based (Meta Data) Security for Apache Spark with LLAP, Atlas, and Ranger
HDF 2.0: Enable Ranger authorization for HDF components (Nifi, Kafka, Storm)
Apache Ranger and HDFS demo gif
How do I login to Zeppelin when Security is enabled using HDP 2.5 Tech Preview SandBox
Demystifying Delegation Token
Best Practices In HDFS Authorization with Apache Ranger
Kerberos: The Missing Guide
Security in "Enterprise Ready" Data Lake
Hive CLI Security

【Kerberos】
Configure Hive view for Kerberized Cluster
Connecting to HBase in a Kerberos Enabled Cluster
Manual Keytab / Principal creation for IPA to support Ambari Kerberos Wizard
Kafka SSL + KERBEROS - Cheat Sheet Settings/Console commands
Step by Step Recipe for Securing Kafka with Kerberos
Automate HDP installation using Ambari Blueprints – Part 5
User authentication from Windows Workstation to HDP Realm Using MIT Kerberos Client (with Firefox)
How to configure zeppelin livy interpreter for secure HDP cluster
Configure Mac and Firefox to access HDP/HDF SPNEGO UI
Adding KDC Administrator Credentials to the Ambari Credential Store
Connect Hadoop client on Mac OS X to Kerberized HDP cluster
Oozie kerberized java action to query HiveServer2 using JDBC
Secure Kafka Java Producer with Kerberos
A Secure HDFS Client Example
Apache NiFi 1.0.0 Kerberos Authentication
Auth-to-local Rules Syntax
Demystifying Delegation Token
NiFi Security: User Authentication with Kerberos
Starting Spark jobs via REST API on a kerberized cluster
Access Kerberos cluster from JAVA using cached ticket
Sample Application to write to a Kerberised HBase
Secure Kafka Java Producer with Kerberos

【Ranger】
Secure HDP 2.3 with Apache Ranger
Row Filter and Column Masking (cell-level security)
Bring Spark to the Business with Fine Grain Security
HDF 2.0: Apache Nifi integration with Apache Ambari/Ranger
Tag Based (Meta Data) Security for Apache Spark with LLAP, Atlas, and Ranger
Customizing Ranger Policies with Dynamic Context
Create dynamic row level filter in Ranger (filter rows by the current user)
Apache Ranger and HDFS (usable for multi-tenancy)
Installing Apache Ranger with Ambari Postgresql
Automatically deleting Ranger Users from MySQL and Postgres After Using REST API (automatically deletes Ranger user related metadata)
Rest call to ranger on wire encrypted cluster(cert)
Best Practices In HDFS Authorization with Apache Ranger
Apache Ranger and Yarn setup - Security

【knox】
How to enable Knox proxying for Zeppelin
Configure Knox to access HBASE UI
How to pass Hive configuration parameters to Knox via cURL
Adding Livy Server as service to Apache Knox
Integrating KNOX with LoadBalancer
Configure Knox to access HDFS UI
Configure Knox to access Ambari UI
Demystify Knox, LDAP, SSL, CA Cert integration
Replace Knox self-signed certificate with CA certificate
Tips on troubleshooting WebHbase connection through Knox.
Starting Spark jobs via REST API on a kerberized cluster

【Ambari】
How to install ambari-shell
Ambari: LDAP to CSV - Quick one liner
Ambari database tables ER Diagram - Ambari 2.4.2
Automate HDP installation using Ambari Blueprints – Part 5
Using Ansible to deploy HDP on AWS
HDF-2.0 LDAP User Authentication with Ambari
HDF 2.0: Apache Nifi integration with Apache Ambari/Ranger
Visualize Cluster and Service Allocation - Reloaded!
Ambari and HDP Installation: Quick Start for new Virtual Machine Users (getting started)
When Webserver is installed in front to the Ambari then PigView does not load Script
Ambari shows HDP services to be down whereas they aren't!
Install HDB (HAWQ) via Ambari and use Zeppelin for visualization
HowTo install and configure high availability on Atlas?
hadoop logs push to Kafka topic using Ambari LogSearch
Enable HTTPS for Ambari using JKS
Installing Apache Ranger with Ambari Postgresql
Interacting with HDFS using native linux commands through an NFS Gateway

【Falcon】
Falcon Hive Integration
Introduction to Apache Falcon: data governance for Hadoop

【Hbase】
Configure Knox to access HBASE UI
Creating HBase HFiles From a Hive Table
Import CSV data into HBase using importtsv
Start and test HBASE thrift server in a kerberised environment
Connecting to HBase in a Kerberos Enabled Cluster

Copy as Markdown for Chrome & Firefox

https://github.com/chitsaou/copy-as-markdown/

Ambari Blueprints wiki

home

User authentication from Windows Workstation to HDP Realm Using MIT Kerberos Client (with Firefox)

Original article

Installing HDF via Ambari

Official HDF documentation
Installation script:

$ ambari-server install-mpack --mpack=http://repo.imaicloud.com/HDF/ubuntu14/2.x/updates/2.0.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-2.0.0.0-579.tar.gz --purge --verbose
ERROR: Exiting with exit code 1.
REASON: This Ambari instance is already managing the cluster hdp1 that has the HDP-2.5 stack installed on it. The management pack you are attempting to install only contains stack definitions for [HDF]. Since this management pack does not contain a stack that has already being deployed by Ambari, the --purge option would cause your existing Ambari installation to be unusable. Due to that we cannot install this management pack.
Mpack installation checker failed!
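
The error says --purge would wipe the existing HDP stack definitions. To add HDF to an Ambari server that already manages an HDP cluster, the usual workaround is to install the management pack without --purge (a sketch, not verified here):

$ ambari-server install-mpack --mpack=http://repo.imaicloud.com/HDF/ubuntu14/2.x/updates/2.0.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-2.0.0.0-579.tar.gz --verbose
$ ambari-server restart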

community.hortonworks.com

Sandbox & Learning, 140 articles.

Maven proxy configuration

Reference
The Maven configuration file is located at ~/.m2/settings.xml. Put the following content into it:

<settings>
  <proxies>
   <proxy>
      <id>example-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
      <username>proxyuser</username>
      <password>somepassword</password>
      <nonProxyHosts>www.google.com|*.example.com</nonProxyHosts>
    </proxy>
  </proxies>
</settings>

Time synchronization with ntpd

HBase failed to start with the error: Clock skew too great. It turned out the three nodes' clocks disagreed, and u1401's time was clearly wrong. All three nodes had the ntpd service installed and running.

$ date                       (the date command shows the current system time)
Fri Jun 23 07:31:06 UTC 2017

If the system time differs too much from Internet time, ntpd will not correct it automatically. ntpd's configuration file is /etc/ntp.conf; Ubuntu's time servers can be found there:

server 0.ubuntu.pool.ntp.org
server ntp.ubuntu.com

Stop the ntpd service, synchronize the time manually, then start the time service again:

$ ssh u1401                             (log in to u1401)
$ service ntpd stop                     (stop the NTP service)
$ ntpdate 0.ubuntu.pool.ntp.org         (manually sync the time with an Internet server)
23 Jun 07:25:05 ntpdate[7416]: adjust time server 101.6.6.172 offset 0.017583 sec
$ service ntpd start                    (start the NTP service again)

Now the three VMs' clocks agree. To make ntpd ignore the difference between local and Internet time and force synchronization, add a line to the /etc/ntp.conf configuration file:

tinker panic 0

GitHub Pages permalink

---
permalink: /mypageurl/
---

CentOS 7 yum metadata errors

Added CentOS7-Base-163.repo to the /etc/yum.repos.d/ directory and removed CentOS-Base.repo.

$ yum --enablerepo=HDP-2.6 clean metadata
$ yum --enablerepo=HDP-2.6.1.0 clean metadata
$ yum --enablerepo=ambari-2.5.3.0 clean metadata
$ yum install hadoop_2_6_1_0_129

Problem: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. Solution:

$ yum provides '*/applydeltarpm'
$ yum install deltarpm

Problem: Package does not match intended download

$ yum clean all
$ yum install xxxx

/e/vagrant11/ambari-vagrant/centos7.3

c7301.1 HDFS (kerberized)
c7301.2 added YARN/Hive/Knox
c7301.2a saved after halt
c7301.3 added Ranger/HBase
c7301.4 added Zeppelin, Spark (Livy/Thrift), Spark2
c7301.5 added Oozie
c7301.6 backup before enabling SSL

python-pip

$ yum install epel-release
$ yum install python-pip

pip install pyhs2

$ pip install pyhs2 reported that cc1plus could not be found:

$ find / -name cc1plus
/usr/libexec/gcc/x86_64-redhat-linux/4.8.2/cc1plus
$ export PATH=$PATH:/usr/libexec/gcc/x86_64-redhat-linux/4.8.2    (add the directory containing cc1plus to PATH)

Then $ pip install pyhs2 reported that sasl/sasl.h could not be found:

$ yum install cyrus-sasl-devel.x86_64
$ pip install pyhs2           (succeeded)

yum install cyrus-sasl-gssapi cyrus-sasl-md5 cyrus-sasl-plain cyrus-sasl-devel

serverless

Source
In the open-source world there are also projects such as OpenWhisk, OpenLambda, the Serverless framework, and Iron.io.
Serverless can be described as a process, a tool, or an architecture, and FaaS is a subset of Serverless. Serverless covers two concepts, FaaS and BaaS: FaaS (Function as a Service) means functions in the cloud, where functions are pushed to the cloud and the cloud handles function-level scheduling and elasticity; BaaS (Backend as a Service) refers to cloud products and services of all kinds, so cloud storage, cloud databases, cloud monitoring, and cloud alerting can all be grouped under BaaS.
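
As a concrete FaaS example, deploying and invoking a function with the OpenWhisk wsk CLI looks roughly like this (a sketch; hello.js is a hypothetical function file):

$ wsk action create hello hello.js        (deploy the function)
$ wsk action invoke hello --result        (invoke it and print only the result)
$ wsk action list                         (list deployed functions)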

vagrant fails to start the c7303 VM; caused by duplicate kernels

Check /var/log/vboxadd-install.log:

tmp/vbox.0/Makefile.include.header:94: *** Error: unable to find the 
sources of your current Linux kernel. Specify KERN_DIR=<directory> and 
run Make again.  Stop.

List the installed kernels:

$ rpm -qa | grep kernel | sort
kernel-3.10.0-514.26.2.el7.x86_64
kernel-3.10.0-514.el7.x86_64
kernel-headers-3.10.0-514.26.2.el7.x86_64
kernel-tools-3.10.0-514.26.2.el7.x86_64
kernel-tools-libs-3.10.0-514.26.2.el7.x86_64

Comparing against the other two VMs' kernels, the second kernel turned out to be redundant. Remove it:

$ yum remove kernel-3.10.0-514.el7.x86_64 -y
$ yum update -y
$ yum install kernel-devel

Note that kernel-devel, kernel, and kernel-headers must all have the same version number.
After restarting the VM through vagrant, GuestAdditions no longer reported errors.

Pig

  1. A policy sync failure was reported because Ranger was not running; the problem went away once Ranger was started.
  2. File does not exist: /user/${username}/pig/jobs/job_id/stdout and stderr. See this and this.

hadoop-examples-1.1.0-SNAPSHOT.jar

Reference

After copying MIT Kerberos DLLs into the system32 directory

E:\opt\JAAS>klist
Current LogonId is 0:0x60db1
Cached Tickets: (2)
#0>     Client: wbwang @ HOME.LANGCHAO.COM
        Server: krbtgt/HOME.LANGCHAO.COM @ HOME.LANGCHAO.COM
        KerbTicket Encryption Type: RSADSI RC4-HMAC(NT)
        Ticket Flags 0x40e10000 -> forwardable renewable initial pre_authent name_canonicalize
        Start Time: 7/26/2017 14:06:14 (local)
        End Time:   7/27/2017 0:06:14 (local)
        Renew Time: 8/2/2017 14:06:14 (local)
        Session Key Type: AES-256-CTS-HMAC-SHA1-96
        Cache Flags: 0x1 -> PRIMARY
        Kdc Called: JTJNDC007.home.langchao.com

#1>     Client: wbwang @ HOME.LANGCHAO.COM
        Server: LDAP/jtjndc009.home.langchao.com/home.langchao.com @ HOME.LANGCHAO.COM
        KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
        Ticket Flags 0x40a50000 -> forwardable renewable pre_authent ok_as_delegate name_canonicalize
        Start Time: 7/26/2017 14:06:51 (local)
        End Time:   7/27/2017 0:06:14 (local)
        Renew Time: 8/2/2017 14:06:14 (local)
        Session Key Type: AES-256-CTS-HMAC-SHA1-96
        Cache Flags: 0
        Kdc Called: JTJNDC007.home.langchao.com

update-ca-trust

Reference

HDP SSL

$ openssl req -new -newkey rsa:2048 -nodes -keyout c7301.ambari.apache.org.key -out c7301.csr -subj "/C=CN/ST=Shan Dong/L=Ji Nan/O=Inspur/OU=SBG/CN=c7301.ambari.apache.org"
$ openssl req -new -newkey rsa:2048 -nodes -keyout c7302.ambari.apache.org.key -out c7302.csr -subj "/C=CN/ST=Shan Dong/L=Ji Nan/O=Inspur/OU=SBG/CN=c7302.ambari.apache.org"
$ openssl req -new -newkey rsa:2048 -nodes -keyout c7303.ambari.apache.org.key -out c7303.csr -subj "/C=CN/ST=Shan Dong/L=Ji Nan/O=Inspur/OU=SBG/CN=c7303.ambari.apache.org"
$ openssl ca -in c7301.csr -out c7301.ambari.apache.org.crt
$ openssl ca -in c7302.csr -out c7302.ambari.apache.org.crt
$ openssl ca -in c7303.csr -out c7303.ambari.apache.org.crt

KMS

Ranger KMS DB: user: rangerkms, db: rangerkms, passwd: vagrant
Ranger KMS Root DB: Database Administrator (DBA) username: root, passwd: vagrant
KMS master key password: 1

user/group

$ groupadd <group-name>                     (add a group)
$ useradd <user-name> -g <group-name>       (add a user and assign a group)
$ userdel <user-name>                       (delete a user)
$ groupdel <group-name>                     (delete a group)

Using Knox with ActiveDirectory

Original article

Red Hat security documentation

Link

htpasswd

Install on CentOS:

$ yum install httpd-tools
$ htpasswd -n webb

Notes on the format of the password file generated by htpasswd
The analogous password format of the /etc/shadow file
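
For reference, writing the hash into a password file and the kind of line it produces (a sketch; the file path and user name are illustrative, and the hash is elided):

$ htpasswd -c /etc/httpd/.htpasswd webb   (-c creates the file; omit it when adding more users)
$ cat /etc/httpd/.htpasswd
webb:$apr1$...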

Ambari-SSSD-Service

https://github.com/abajwa-hw/sssd-service

blog raw

{% raw %}
     content that Jekyll should not parse
{% endraw %}

Deploying Hortonworks Sandbox on VirtualBox

Installation guide;
Download the sandbox image: official link / Inspur intranet link
Import the downloaded image (.ova) into Oracle VirtualBox. 8 GB of RAM is required.
Add the following to the Windows hosts file:

127.0.0.1   localhost   sandbox.hortonworks.com

Netcat

Netcat (often abbreviated to nc) is a computer networking utility for reading from and writing to network connections using TCP or UDP.
Reference
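
Two quick hedged examples, a port check and a throw-away TCP chat between two terminals (host and port are illustrative; some nc variants need -l -p for the listener):

$ nc -zv c7301.ambari.apache.org 8080     (check whether a TCP port is open)
$ nc -l 1234                              (terminal 1: listen on port 1234)
$ nc localhost 1234                       (terminal 2: connect; typed lines are sent across)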

Ambari NIFI service

Adding NiFi Server to HDP 2.5 Sandbox

PostgreSQL JDBC (connecting from Java)

Original article. Class TestMain.java:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.DriverManager;
import  org.postgresql.Driver;

public class TestMain
{
    public static void main(String[] args)
    {
        Connection conn = getConn();
        String sql="select * from student";
        Statement stmt=null;
        ResultSet rs=null;
        try
        {
            stmt=conn.createStatement();
            rs=stmt.executeQuery(sql);
            while(rs.next()){
                System.out.println(rs.getInt(1));
            }
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
    }
    public static Connection getConn()
    {
        Connection conn = null;
        try
        {
            Class.forName("org.postgresql.Driver");
            String url = "jdbc:postgresql://c7301.ambari.apache.org:5432/kong";
            try
            {
                conn = DriverManager.getConnection(url, "kong", "vagrant");
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
        }
        catch (ClassNotFoundException e)
        {
            e.printStackTrace();
        }
        return conn;
    }
}

Download the JDBC driver: postgresql-42.1.4.jar
Compile and run:

$ javac -cp "./postgresql-42.1.4.jar" TestMain.java
$ java -cp ".:./postgresql-42.1.4.jar" TestMain

postgreSQL

v9.6 + centos7.3

$ yum install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm
$ yum install postgresql96-server
$ /usr/pgsql-9.6/bin/postgresql96-setup initdb
$ systemctl enable postgresql-9.6
$ systemctl start postgresql-9.6
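
To create the kong database and user that the JDBC example above connects to (a sketch; pg_hba.conf may also need an md5 entry to allow remote password logins):

$ sudo -u postgres psql -c "CREATE USER kong WITH PASSWORD 'vagrant';"
$ sudo -u postgres psql -c "CREATE DATABASE kong OWNER kong;"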

Ubuntu desktop

E:\vagrant9\ambari-vagrant\ubuntu14.4

  config.vm.define :u1407 do |u1407|
    u1407.vm.hostname = "u1407.ambari.apache.org"
    u1407.vm.network :private_network, ip: "192.168.14.107"
    u1407.vm.box = "box-cutter/ubuntu1404-desktop"
    u1407.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", 2048] # RAM allocated to each VM
    end
  end

The box is an Ubuntu desktop edition, but after booting no desktop appears. Install the desktop as described in this document:

sudo apt-get install ubuntu-desktop

Sure enough, the box does not ship with the desktop packages by default; not sure why.

Another Ubuntu desktop box: bstoots/xubuntu-16.04-desktop-amd64

Command to add the box with vagrant:

vagrant box add bstoots/xubuntu-16.04-desktop-amd64

The command above works, but adding box-cutter/ubuntu1404-desktop reports that the box cannot be found.

openssl sha256

Use the openssl command line to compute the SHA-256 hash of some input.

$ openssl sha256
1(stdin)= 6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b

Type 1 and then press Ctrl+D twice to compute the hash of the string "1".
You can verify the result on this online encryption/decryption page: http://tool.oschina.net/encrypt?type=2
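
The same hash can be computed non-interactively; printf avoids appending a newline, which would change the digest:

$ printf '1' | openssl sha256
(stdin)= 6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b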

Official blog of the e-commerce big data platform

Chrome extension that automatically adds a table of contents to GitHub wiki pages: Google Chrome / Chromium

Downloading packages manually with yum

Reference

Elasticsearch documentation

elasticsearch中文权威指南.pdf ElasticSearchManual.pdf

http_load

home

$ cd /opt
$ wget https://acme.com/software/http_load/http_load-09Mar2016.tar.gz
$ tar xzvf http_load-09Mar2016.tar.gz
$ cd  http_load-09Mar2016
$ make
$ make install

Create a text file, e.g. t.tt, containing:

http://localhost:8080/

Then run:

$ http_load -p 10 -s 5 t.tt
94720 fetches, 10 max parallel, 1.8944e+06 bytes, in 5 seconds
20 mean bytes/connection
18944 fetches/sec, 378880 bytes/sec
msecs/connect: 0.0958101 mean, 0.772 max, 0.014 min
msecs/first-response: 0.293987 mean, 0.988 max, 0.232 min
HTTP response codes:
  code 200 -- 94720

-p is the number of concurrent users; -s is the run duration in seconds.

What is HMAC Authentication and why is it useful?

here
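
A minimal command-line illustration of computing an HMAC (a sketch; the message and key are arbitrary):

$ printf 'some message' | openssl dgst -sha256 -hmac 'secretkey'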

java base64

Original article

// encode
String asB64 = java.util.Base64.getEncoder().encodeToString("some string".getBytes("utf-8"));
// decode
byte[] asBytes = java.util.Base64.getDecoder().decode("c29tZSBzdHJpbmc=");
// URL-safe encode
String urlEncoded = java.util.Base64.getUrlEncoder().encodeToString("subjects?abcd".getBytes("utf-8"));

Confluent REST Proxy for Kafka

https://github.com/confluentinc/kafka-rest

Franchise: an SQL notebook in the browser

https://franchise.cloud

A collection of useful resources for building RESTful HTTP+JSON APIs.

https://github.com/yosriady/api-development-tools

awesome useful-java-links

https://github.com/Vedenin/useful-java-links

Awesome_APIs (includes a list of Chinese domestic API services)

https://github.com/TonnyL/Awesome_APIs

awesome rest

https://github.com/marmelab/awesome-rest

Configuring port forwarding in Vagrant

Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 80, host: 8080
end

openstack all in one

Searching for openstack on https://app.vagrantup.com/boxes/search turns up the stackinabox/openstack vagrant box. It claims to pack OpenStack into this single box; the operating system is Ubuntu 16.
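
Trying it out would look roughly like this (a sketch; the box is large, so the download takes a while):

$ vagrant init stackinabox/openstack
$ vagrant up --provider virtualbox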

⚠️ **GitHub.com Fallback** ⚠️