Binder: Creating the ServiceManager
Following on from the previous article, *Binder: A First Look at addService*, we know that the client communicates with the service side through `BpBinder`'s `transact` method, which in turn hands the data to `IPCThreadState`'s `transact` method for delivery to the service side.
Eventually we arrive at `IPCThreadState`'s `writeTransactionData` method:
frameworks/native/libs/binder/IPCThreadState.cpp
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    // Pack the data into tr
    tr.target.ptr = 0;
    tr.target.handle = handle; // handle = 0, targets the ServiceManager
    tr.code = code;            // operation code, e.g. ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd); // command code BC_TRANSACTION
    // Write tr out for transmission
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
```
While the data is being transferred, `handle = 0` is what locates the `service_manager` on the service side.
Let's now walk through how the `ServiceManager` is created.
ServiceManager
The `ServiceManager` is started along with Android's init process; it is declared in the `init.rc` file.
Its executable is `/system/bin/servicemanager`, its source file is `service_manager.c`, and its process name is `/system/bin/servicemanager`.
```
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
```
So the entry point for starting the `ServiceManager` is the `main` function of `service_manager.c`.
main
frameworks/native/cmds/servicemanager/service_manager.c
```c
int main(int argc, char** argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;

    if (argc > 1) {
        driver = argv[1];
    } else {
        driver = "/dev/binder";
    }

    // Open the binder driver
    bs = binder_open(driver, 128*1024);
    ...
    // Make the ServiceManager the binder context manager
    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    ...
    // Start the binder loop and listen for incoming data
    binder_loop(bs, svcmgr_handler);

    return 0;
}
```
The `main` function does three things:

- Opens the binder driver via `binder_open` and maps 128KB of memory
- Registers the `ServiceManager` as the binder context manager via `binder_become_context_manager`
- Starts the binder loop via `binder_loop` and listens for incoming data
binder_open
frameworks/native/cmds/servicemanager/binder.c
```c
struct binder_state *binder_open(const char* driver, size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    // Open the binder driver
    bs->fd = open(driver, O_RDWR | O_CLOEXEC);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open %s (%s)\n",
                driver, strerror(errno));
        goto fail_open;
    }

    // Check the binder version
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    // Record the mmap size, 128KB here
    bs->mapsize = mapsize;
    // Map the driver and record the mapped address
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
```
This function mainly fills in the three fields of the `bs` structure, which is of type `binder_state`:
```c
struct binder_state
{
    int fd;         // file descriptor of /dev/binder
    void *mapped;   // mapped memory address
    size_t mapsize; // size of the mapping
};
```
So `binder_open` mainly does the following:

- Opens the binder driver
- Verifies the binder version
- Sets the mmap mapping size, 128KB by default
- Records the mmap mapping address
binder_become_context_manager
frameworks/native/cmds/servicemanager/binder.c
```c
int binder_become_context_manager(struct binder_state *bs)
{
    struct flat_binder_object obj;
    // Initialize obj
    memset(&obj, 0, sizeof(obj));
    obj.flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX;

    // Talk to the binder driver
    int result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR_EXT, &obj);

    if (result != 0) {
        android_errorWriteLog(0x534e4554, "121035042");

        // Fall back to the legacy command
        result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
    }

    return result;
}
```
In `binder_become_context_manager`, `ioctl` is used to talk to the binder driver and register the `ServiceManager` as the binder context manager, which centrally handles binder data transfers. It first tries the newer `BINDER_SET_CONTEXT_MGR_EXT` command with a `flat_binder_object`; if that fails, it falls back to the legacy `BINDER_SET_CONTEXT_MGR` command, passing `0` as the identifier.
binder_loop
frameworks/native/cmds/servicemanager/binder.c
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    // Write the command to the driver
    binder_write(bs, readbuf, sizeof(uint32_t));

    // Loop forever, waiting for incoming data
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // Read data from the binder driver
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // Parse the data
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
`binder_loop` mainly does three things:

- Writes the `BC_ENTER_LOOPER` command to the binder driver via `binder_write`
- Starts a loop that uses `ioctl` to wait for and read data
- Once data arrives, parses it further via `binder_parse`
binder_write
frameworks/native/cmds/servicemanager/binder.c
```c
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    // Fill the data into bwr
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    // Send the data to the driver
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
```
Here the data is packed into `bwr`, a `binder_write_read` structure: when writing, the data goes into `write_buffer`; when reading, it is fetched from `read_buffer`. It is therefore a data carrier that supports both directions, allowing `ioctl` to perform reads and writes against the binder driver in a single call.

Since this is the first entry and the loop is about to start, the `BC_ENTER_LOOPER` command is sent first, telling the binder driver that this thread is entering the loop.

The `bwr` is then handed to the binder driver by issuing `ioctl` with the `BINDER_WRITE_READ` communication code.
binder_parse
frameworks/native/cmds/servicemanager/binder.c
```c
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    // Address just past the end of the data
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        ...
        case BR_TRANSACTION: {
            struct binder_transaction_data_secctx txn;
            ...
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, &txn.transaction_data);
                // Call func, which here is svcmgr_handler
                res = func(bs, &txn, &msg, &reply);
                if (txn.transaction_data.flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
                } else {
                    // Send the reply
                    binder_send_reply(bs, &reply,
                                      txn.transaction_data.data.ptr.buffer, res);
                }
            }
            break;
        }
        ...
        case BR_REPLY: {
            ...
            break;
        }
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
```
`binder_parse` parses the binder data. The parameter `ptr` points at the read buffer (which initially held `BC_ENTER_LOOPER`), and `func` points to `svcmgr_handler`. So whenever a request arrives, `svcmgr_handler` is invoked, and its result is sent back to the client via `binder_send_reply`. This corresponds to the `BC_REPLY` mentioned in the previous article.
This `svcmgr_handler` is the handler passed in from `binder_loop` at the outermost level.
svcmgr_handler
frameworks/native/cmds/servicemanager/service_manager.c
```c
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data_secctx *txn_secctx,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    uint32_t dumpsys_priority;

    struct binder_transaction_data *txn = &txn_secctx->transaction_data;

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        // Look up the service
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid,
                                 (const char*) txn_secctx->secctx);
        if (!handle) break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        dumpsys_priority = bio_get_uint32(msg);
        // Register the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated,
                           dumpsys_priority, txn->sender_pid,
                           (const char*) txn_secctx->secctx))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);
        uint32_t req_dumpsys_priority = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid, (const char*) txn_secctx->secctx,
                          txn->sender_euid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        // Walk the service list
        while (si) {
            if (si->dumpsys_priority & req_dumpsys_priority) {
                if (n == 0) break;
                n--;
            }
            si = si->next;
        }
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
```
`svcmgr_handler` handles the operations on services; for example, the `addService` operation mentioned in the previous article ultimately lands in the `SVC_MGR_ADD_SERVICE` branch.

In `SVC_MGR_ADD_SERVICE`, the `do_add_service` function is called to register the service.
do_add_service
frameworks/native/cmds/servicemanager/service_manager.c
```c
int do_add_service(struct binder_state *bs, const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   uint32_t dumpsys_priority, pid_t spid, const char* sid)
{
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    // Check whether this service is allowed to register
    if (!svc_can_register(s, len, spid, sid, uid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    // Check whether it is already registered
    si = find_svc(s, len);
    if (si) {
        // Already registered
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        // Not yet registered: allocate memory for the entry
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->dumpsys_priority = dumpsys_priority;
        si->next = svclist;
        // Save it into the svclist
        svclist = si;
    }

    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);

    return 0;
}
```
`do_add_service` first checks whether the service is allowed to register, then looks through the existing `svclist` to see whether the service is already registered; if not, it allocates memory for the service entry and finally adds it to the `svclist` registry.
That completes the analysis of the `ServiceManager` flow. To summarize:

- `binder_open` opens the binder driver and mmaps 128KB of memory
- `binder_become_context_manager` registers the `ServiceManager` as the binder context manager, reached via handle `0`
- `binder_loop` starts the loop, waiting for and listening to data from the client
- While listening, `binder_write` tells the binder driver to enter the loop
- `ioctl` performs the actual reads and writes against the binder driver
- `binder_parse` parses the `BR_*` commands and sends the reply (`BC_REPLY`) back to the client
- The parsed data is dispatched to `svcmgr_handler`, which centrally handles the logic for registering, looking up, and verifying services
- The `ServiceManager` keeps registered services in the `svclist` table