spark: read from kafka, process, and write back to kafka

References:
Reading from Kafka and processing with Spark Streaming: https://www.cnblogs.com/zhangXingSheng/p/6646879.html
Pitfalls to watch for when using foreachRDD on a Spark DStream: https://www.cnblogs.com/realzjx/p/5853094.html
Kafka study notes: implementing a Kafka producer and consumer in Scala: https://blog.csdn.net/u012965373/article/details/74548388
Integrating Spark Streaming with Kafka: https://blog.csdn.net/u012373815/article/details/53454669
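A minimal sketch of the end-to-end pattern those posts describe, assuming the spark-streaming-kafka-0-10 integration; the broker address (localhost:9092), group id, and topic names (input-topic, output-topic) are placeholders, and the uppercase mapping stands in for real processing. The key pitfall from the foreachRDD post: KafkaProducer is not serializable, so create it inside foreachPartition on the executors, not on the driver.

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaToKafka {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-kafka")
    val ssc = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",  // placeholder: point at your brokers
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "spark-demo",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("input-topic"), kafkaParams))

    // The function passed to foreachRDD runs on the driver; the closure inside
    // foreachPartition runs on the executors, so build the producer there.
    stream.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        val props = new java.util.Properties()
        props.put("bootstrap.servers", "localhost:9092")
        props.put("key.serializer", classOf[StringSerializer].getName)
        props.put("value.serializer", classOf[StringSerializer].getName)
        val producer = new KafkaProducer[String, String](props)
        records.foreach { r =>
          // Placeholder transform: forward the record with an uppercased value.
          producer.send(new ProducerRecord[String, String]("output-topic", r.key, r.value.toUpperCase))
        }
        producer.close()
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

Running this needs the integration artifact on the classpath, e.g. spark-submit --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.0.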

docker: switch to the aliyun registry mirror

Install/upgrade your Docker client: version 1.10.0 or later is recommended (see the docker-ce docs).
Configuring the registry mirror (for Docker clients newer than 1.10.0): enable the accelerator by editing the daemon config file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://cl9ahkai.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

yum: update to the aliyun source

https://blog.csdn.net/u014008779/article/details/78563730
1. Back up the existing repo file:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2. Download the new CentOS-Base.repo into /etc/yum.repos.d/:
CentOS 5: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-5.repo
CentOS 6: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
CentOS 7: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
3. Rebuild the yum cache:
yum makecache

docker spark cluster 7077

https://blog.csdn.net/yang1464657625/article/details/78798042
Spark standalone deployment on a single machine:
start-slave.sh spark://192.168.1.136:7077   (use the host machine's IP address here)
docker run -it -p 8088:8088 -p 4040:4040 -p 7077:7077 -p 8080:8080 -p 6066:6066 yanggf008/spark:2.1.0 bash
docker commit -m="bootstrap.sh changed" -a="yanggf008" 630b yanggf008/spark2.1.0:vspark
Leaving a container without stopping it:
Option 1: to detach and keep the container running, press Ctrl+P then Ctrl+Q.
Option 2: if you leave with exit, the container stops; to recover it, restart the container with docker restart, then re-enter it with docker attach.
Restart httpd (service httpd restart) and radosgw (/etc/init.d/ceph-radosgw restart), then verify that radosgw came back up with wget (wget http://127.0.0.1).
Building a Spark 2.1.0 Docker image: https://blog.csdn.net/farawayzheng_necas/article/details/54341036
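To confirm the master exposed on 7077 actually accepts applications, a minimal Scala sketch; the master URL reuses the host IP from above, and the app name and trivial sum job are made-up placeholders:

import org.apache.spark.sql.SparkSession

object ClusterSmokeTest {
  def main(args: Array[String]): Unit = {
    // Connect to the standalone master the container exposes on port 7077.
    val spark = SparkSession.builder()
      .appName("docker-cluster-smoke-test")
      .master("spark://192.168.1.136:7077")
      .getOrCreate()
    // Run a trivial job to confirm executors are reachable.
    val sum = spark.sparkContext.parallelize(1 to 100).reduce(_ + _)
    println(s"sum = $sum")
    spark.stop()
  }
}

Any Spark 2.x client with network access to port 7077 should be able to run this.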

centos7 install docker failed

[root@Master ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 二 2018-05-22 14:50:15 CST; 7s ago
     Docs: http://docs.docker.com
  Process: 3478 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
 Main PID: 3478…

docker rmi error: image is being used by running container

Reference: http://www.jb51.net/article/102168.htm
# docker rmi e934aafc2206
Error response from daemon: conflict: unable to delete e934aafc2206 (cannot be forced) - image is being used by running container 1a6c11fd49b8
The docker image cannot be deleted. Solution:
1. List the container records first: docker ps -a
2. Remove all container records for that image; to remove every container record at once: docker ps -a | awk '{print $1}' | xargs docker rm
3. After that, docker rmi 5e4f2da203e2 works.
Some people online run service docker restart before deleting; I tried that and it did not work, probably because the environments differ.

docker copy file from container to host, from host to container on windows

xshell connect to 192.168.99.100 (user: docker, pass: tcuser)
docker ps   (find the container id)
From container to host:
docker cp containerId:/usr/elasticsearch.tar.gz /usr
From host to container:
docker exec -i 6c3d52a9763b bash -c 'cat > /tmp/elasltic.tar' < /usr/elasticsearch-5.5.0.tar.gz …